October 27, 2022
Was Sherlock intuitively enamored or simply sick with influenza?, by Howard Rankin
Will intuition machines turn artificial stupidity into true artificial intelligence?, by Michael Hentschel
Can we program intuition?, by Grant Renier
Was Sherlock intuitively enamored or simply sick with influenza?
“I’m always so impressed by your intuition, Sherlock,” Watson remarked as he and Holmes finished their meals.
“I don’t know what you mean by the word ‘intuition,’” the detective replied.
“It is true that sometimes, without conscious activity, I remember an event that might be salient. It springs from my subconscious, as if some coordinated activity is going on there, of which I’m not aware. However, just because it has been delivered into my awareness by my subconscious doesn’t automatically make it relevant or meaningful. I need rational enquiry to confirm its validity. So, just because it has appeared mysteriously in my consciousness doesn’t make it valid on its own.”
“But Sherlock, sometimes these notions arrive accompanied by physical sensations. Surely that proves they are more than random ideas?” questioned Watson.
“Watson, do you recall last autumn when you met that lady from the southern end of
Baker Street? You told me that you were enamored of her, because during the encounter you felt your heart racing and sweat emanating from the pores of your skin.”
Watson nodded, or perhaps he was holding his head in shame.
“Your physical sensations of excitement became the basis of your attraction. Until the following day, when you discovered that you had a case of influenza, and that the symptoms you had interpreted as love were, in fact, a nasty bug.”
Intuition is a complex and potentially confusing concept. Arising from the subconscious, a thought or a feeling magically engages our attention. Sometimes this reflects an important subconscious dynamic: we can’t consciously access every aspect of our existence, and much of it lurks below the surface. There will be times when an idea or a sensation suddenly appears in your head. The issue is what that notion means.

It could be part of a memory that has been repressed. I once had a client who had been assaulted in a park. Consciously, she could not recall the details, but I had been able to access them during a course of hypnotherapy. She told me that on one occasion, as she was walking into a restaurant, she saw some tulips outside. She suddenly panicked, had the intuition that she should avoid the place like the plague, and quickly left the scene. I knew, but she didn’t, that tulips had been abundant at the site of her assault.

Sometimes, intuition is simply our subconscious working faster than our consciousness. For example, when you remove your hand from a hot stove, the muscular movements occur before you are consciously aware of the need to act.
We should listen to our intuition and then, in matters of importance, investigate the possible interpretations with critical thinking before deciding whether it is delivering important information or random associations that have no real significance.
by Howard Rankin PhD, psychology and cognitive neuroscience
Will intuition machines turn artificial stupidity into true artificial intelligence?
Human intuition involves subconscious probabilities of future events, but it's difficult to explain in detail. It may be that we recognize rational probabilities subconsciously and are willing to act irrationally on the hunch, where a purely rational computer might “fear” to tread. Of course, we program in that fear to prevent machines from running amok. Intuition can be rational and irrational, and we aren’t sure how much of either goes into our own decisions. Maybe expert computers can show us WHAT we did, but they won’t know WHY we did it.
Machines are much better at rationally tracking and explaining decisions, because that’s what we teach them. What if we teach them our human intuitive rationality and irrationality? Is that even possible if we don’t fully understand it ourselves? If we try to teach AI machines, is that dose of our irrationality even a good thing? Arguably, if we could just tell them, the machines would turn it all into what we call “Intuitive Rationality”. The result should be a comprehensive simulation of the best human traits (to be encouraged) and the worst human traits (to be discouraged). In the meantime, untrained machines are by design kept artificially stupid.
Can Machines learn our Intuition? Machines have always been our most effective tools to extend our capabilities, and now hardware with software is morphing into AI. Our newest machine-learning algorithms simulate our own learning talents, and speed them up to almost limitless levels, hopefully resulting in enhanced intelligence rather than enhanced stupidity. But traffic navigation intelligence is certainly already exceeding our own driving-safety standards in terms of reliable alertness, if not yet in the final quality of all human sensors and decisions.
Can Machines learn our Creativity? Can creativity exist without intuition? The relationship between Intuition and Creativity may not be direct causation, but there is a relationship and interdependence. Intuition is predictive in a somewhat mysterious process that has no obvious precedent, and Creativity is unpredictable in another mysterious process of combining elements in unprecedented ways. All this mystery may make it doubly unlikely that we can teach our machines intuition and creativity, but AI art-design platforms like Dall-E and Stable Diffusion are increasingly programmed to “behave” in “creating” unprecedented and unexpected images and videos from the totality of possible elements drawn from the entire internet. Remaining distinctions may not matter.
Can Machines learn our Productivity? Have they not already learned that with their own vast contributions to our productivity? Leave aside for the moment that we may not trust our own productivity, as we irrationally impede our own human success with so much self-defeating stupidity. But we have also taught our own machines elements of our own stupidity, keeping them in the dark about more intuitive and creative ways to achieve productivity. The more we can properly teach our machine tools to multiply our wisdom-based successes, the more productive the world can be, and the closer we can come to maximizing the wealth-creation of our vast existing resources. This is not merely exploitative; it is smarter.
by Michael Hentschel, Yale and Kellogg anthropology, economics, venture capital
Can we program intuition?
Our humanized AI acts strangely! Sometimes we can anticipate its next 'alert' about a future event; other times not. And it's been decades since we were last able to deconstruct the reason and logic behind an alert it outputs. As with our own subconscious, we can no longer retrace the reasons for its actions. And this is not just for the investment markets, but for all of its applications to date, like sports, elections, health, opinion, random numbers and more.
Are we now uncomfortably staring in the face of a humanized AI? Is its Intuitive Rationality really simulating a kind of human intuition? Should we now start calling it Jim, Bob, Alice or Jane? Let's deconstruct this!
Wikipedia says this kind of general intelligence is the “. . . hypothetical ability of an intelligent agent to understand or learn any intellectual task that a human being can.”
While it's an easily stated and lofty goal, it is the left side of the computer/human equation: a computer system that can appear to be smart and challenge human intelligence. The right side of the equation, the experiential dynamics of human behavior and intuitive decision-making that actually determine our reality, gets little mention in public presentation, discussion and debate. No doubt sometime in the not-so-distant future, we will observe AI applications that seem to exhibit sentience, self-awareness and consciousness. However, we are convinced that such systems will generally be one step behind their human competitors in the dynamics (an important qualifier) of intuitive thought, creativity and the human ability to perceive new and unique possibilities, the right side of the equation. The qualifier is, "if it walks like a duck . . ." It's our opinion that this computer/human equation will always be a duck with one lame leg.
Daniel Kahneman, Nobel prize winner and the father of behavioral economics, and his partner, Amos Tversky, explained how two systems divide our brain and constantly fight for control of our behavior and actions. The chart, above, shows their basic functions: intuition versus rationality. One is super fast and efficient; the other is slow and lazy, i.e., "To hell with it. I think what I've got is good enough!" Our human/computer equation maps nicely: rational thinking = computer (conscious) and intuitive thinking = human (subconscious). Humanized AI does the rational work, a glutton that doesn't get tired, and attempts the intuitive work as it simulates the twelve biases that we have explained in prior magazine issues. The system digests these ever-changing biases into a singular Intuition, a sense of the future, for all the data sources that it sees, not only after each new data event but for 150 sequential future events. And this is where humanized AI has the edge in the human/computer competition: it's faster, it consumes vast amounts of data, and it never sleeps. It may demonstrate intuition, but its quality will always lack the uniqueness of the human conceptual, creative experience.
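As a rough illustration only, here is a minimal sketch of the kind of aggregation described above: several bias-adjusted signals collapsed into one directional "intuition" score, then projected over a 150-event horizon. The bias names, the equal default weights, and the geometric confidence decay are all assumptions made for the example; none of these details describe Intuality's actual system.

```python
# Hypothetical sketch: combine per-bias signals into a single score and
# project it forward. Bias names and weighting are illustrative only.

def intuition_score(bias_signals, weights=None):
    """Collapse a dict of bias-adjusted signals (each in [-1, 1]) into a
    single directional score via a weighted average."""
    if weights is None:
        weights = {name: 1.0 for name in bias_signals}  # assume equal weights
    total = sum(weights[name] for name in bias_signals)
    return sum(bias_signals[name] * weights[name] for name in bias_signals) / total

def project_forward(history, horizon=150, decay=0.98):
    """Project the latest score over `horizon` future events, attenuating
    confidence the further ahead we look (geometric decay, an assumption)."""
    latest = history[-1]
    return [latest * (decay ** k) for k in range(1, horizon + 1)]

# Example with three made-up bias signals:
signals = {"recency": 0.4, "anchoring": -0.1, "herding": 0.6}
score = intuition_score(signals)          # weighted average of the signals
path = project_forward([score])           # 150 future-event projections
```

The decay factor is the design choice of interest here: it encodes the idea that a "sense of the future" should weaken the further ahead it reaches, mirroring how confidence in any forecast fades with distance.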
by Grant Renier, engineering, mathematics, behavioral science, economics
This content is not for publication
©Intuality Inc 2022-2024 ALL RIGHTS RESERVED