Predictions are happening all the time even if we’re unaware of them, by Howard Rankin
Are we the “Humanized AI” we have been looking for?, by Michael Hentschel
“Do you believe that the acrobat will successfully walk across the wire again?”, by Marguerite Beyer
Predictions are happening all the time even if we’re unaware of them
"Surely our profession, Mr. Mac, would be a drab and sordid one if we did not sometimes set the scene so as to glorify our results…. The quick inference, the subtle trap, the clever forecast of coming events, the triumphant vindication of bold theories -- are these not the pride and the justification of our life's work?" said Sherlock Holmes in The Valley of Fear.
As Holmes implies, a correct forecast of events is clever precisely because it is correct; as Niels Bohr famously quipped, "Prediction is very difficult, especially about the future."
Consciously, we might engage in the predictions business when considering investments, or entering a casino, or watching NFL games.
In these situations, we consciously try to calculate, through whatever means are at our disposal, how to predict the outcomes of events, with the possibility that we can benefit, perhaps financially, from our forecasting.
However, what about the unconscious “forecasting” that goes on behind the curtain of awareness? Such forecasting takes the form of assumptions that are regularly not challenged or even formally articulated.
I assume the car will work normally and without incident as I start up the engine.
I assume that I will continue to breathe normally today as I always have.
I assume that life will go on today as it has pretty much every other day.
Human beings, like every other animal, need to feel in control. We need to be prepared for emergencies, changes, and anything that is a threat. But here's the problem: if we consciously worried about every aspect of our lives and whether it was going to be safe to get out of bed today, we wouldn't get up. We couldn't. At worst we would be obsessive-compulsive and frozen in place, unable to move.
However, this doesn't mean we aren't continually assessing the environment and world around us, predicting the next few seconds or beyond. It's just that this happens unconsciously and only appears in awareness once a line has been crossed and the alarms are readied if not activated. At Intuality we call these events "actionable alerts."
Prediction is something that is happening all the time even if we’re unaware of it.
Who would have guessed it?
by Howard Rankin PhD, psychology and cognitive neuroscience
Are we the “Humanized AI” we have been looking for?
Everyone is aware that our future is simply a set of probabilities coming to pass. Yet no one can be certain of any of those probabilities, at either extreme of 100 percent or zero. Human beings have to deal with this every day. How we deal with uncertainty mentally and psychologically, gaining the confidence to take action nevertheless, defines our lives. Without probabilistic hope, there will be no future at all.
The choices we make in spite of our uncertainties make humans active players in the world's events, determining and re-determining how we shape and are shaped by those events. We do not stop acting, but rather make judgments about the probabilities of our success despite the odds. Sometimes the odds stimulate our intuition toward faith; often we go cold with fear; and many times we need some help in processing the information.
What if an AI could help us increase and maximize such probabilities? Could we not believe and trust that an AI is a superior calculating machine, vastly more capable than we are of handling the world's massive data? Combined with some training in what is truly important to human beings, and a comfortable level of data for human beings to decide on, would such collaboration not be the most desirable combination?
We are beginning to be able to access computer assistance to help make a better world happen. We strive to shape the world the way we would like, protecting ourselves with some layers of deep analysis that might keep us from error. To take action, we must have some level of confidence in the outcome. We may be forced into action, but then we will make the decision that maximizes the outcome. Even the sub-optimal can be maximized.
Why does any human being ever gamble on anything? We know humans do gamble willingly, often excessively as the odds make us manic. But humans are also resigned to the facts of life: often life is a gamble, in foresight or hindsight. Is an action taken out of desperation, or out of some sense of faith and confidence and trust that something good will happen? Is it out of calculation of probabilities? It makes sense to maximize our chances.
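For readers who want to see the "calculation of probabilities" made concrete, here is a minimal sketch of the expected value of a wager; the 48% figure and the payoffs are purely illustrative assumptions, not taken from the article:

```python
# Expected value: the probability-weighted average of a wager's payoffs.
def expected_value(outcomes):
    """outcomes: iterable of (probability, payoff) pairs whose probabilities sum to 1."""
    return sum(p * payoff for p, payoff in outcomes)

# A hypothetical even-money bet with a 48% chance of winning $100:
ev = expected_value([(0.48, 100), (0.52, -100)])
print(round(ev, 2))  # -4.0: on average this gamble loses $4 per play
```

A negative expected value is why the house profits in the long run, even though any single bet can win; maximizing our chances means seeking out the wagers where that number is positive.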
Why would any human being ever trust a black box? The black box would need to have a credible track record of predicting future probabilities. How would a machine achieve such a track record? By being of value to the human being in some sort of decision process that creates something monetizable or valuable. Over time, such value creation or value contribution would elicit human trust and reliance, but that trust would have to be continuously refreshed and re-earned.
Most AIs do not predict the future, because they are not programmed to be good at that. They are used mostly to interpret Big Data and characterize the past in some fashion that allows massive processing of information where human beings are comparatively weak. Human beings deal with excessive levels of complexity all the time, but have learned to apply their own intuition and other principles of intuitive data processing, without which they would be helpless and never make decisions. What if machines were trained in areas where humans are comparatively strong? Training humans to be better information users, while training machines to be more intuitive predictors, is the most promising way to make productive progress.
by Michael Hentschel, anthropologist, economist, venture capitalist
“Do you believe that the acrobat will successfully walk across the wire again?”
A tightrope walker balances on a high wire spanning Niagara Falls. As you watch his silhouette glide above the spray, you wonder how he can see where his next step will fall. He focuses only on balance and the wire, oblivious to the water's steep descent and its deafening roar. He is confident and surefooted, and traverses the falls deftly while guiding the two handles of a wheelbarrow. He crosses the falls while pushing it, not once, but five times!
Before beginning his sixth attempt, he pauses and waits while you watch, speechless. A local TV reporter approaches and stops in front of you, with cameras following.
“Do you believe,” he probes, “that the acrobat will successfully walk across the wire again?” “Why, yes!” you emphatically reply, quite certain of his ability. Did you not just witness five crossings? “Then...get in!” says the reporter.
Compare the question "Are you confident?" with "Are you confident enough?" The first involves knowledge (of past performance). The second requires trust (in what will happen next).
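One way, offered purely as an illustration and not drawn from the article, to put a number on the belief formed from those five crossings is Laplace's rule of succession, a classical estimate of the probability that the next trial succeeds:

```python
# Laplace's rule of succession: after s successes in n trials,
# estimate the probability of success on trial n + 1 as (s + 1) / (n + 2).
def rule_of_succession(successes: int, trials: int) -> float:
    return (successes + 1) / (trials + 2)

# Five successful crossings out of five attempts:
p = rule_of_succession(5, 5)
print(f"{p:.3f}")  # 0.857 -- high confidence, but well short of the certainty
                   # you would want before climbing into the wheelbarrow
```

The gap between 0.857 and 1.0 is, in miniature, the gap between "confident" and "confident enough."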
Therein lies the meaning of humanized AI. Although prediction-making includes both intuitive signals and rational deliberation, humanized AI acknowledges the preponderance of evidence suggesting that intuitive signals, or emotions, play a larger role in the prediction-making process than has been understood until now.
“Recently, the careful, deliberate, data-driven analytical approach is no longer seen as a stand-alone activity in prediction making.” (Twyman, Harvey, & Harries, 2006) Science has found that we are social beings who constantly interact with each other emotionally. While we are making plans and deciding what to do next, the link between our emotions and intuitive signals is activated. Jumping into the wheelbarrow that will traverse a high wire... well, that requires trust. Observation asks for mere belief.
Every prediction carries with it an element of risk, and as researchers gained a better understanding of the emotion/intuitive process in prediction making, they called it “the heuristics-and-biases approach.” (Kahneman, Slovic, & Tversky, 1982) Twenty years later, they dropped the cumbersome term and began to refer to these emerging prediction-making processes as intuition. (Gilovich, Griffin, & Kahneman, 2002)
Humanized AI developer Grant Renier states, “The simulation of human intuition, humanized AI, competes very effectively with the traditional process of historical analysis. If humanized AI can be as effective in its prediction-making results as the traditional, analytical method, its benefits are huge in saving energy, time, and costs. We no longer have to ‘carry a supercomputer on our back,’ metaphorically speaking.”
Simulated intuition can open up a world that is a quantum leap ahead of where we stand today. Its predictions can be wrong at times, yet humanized AI demonstrates that it can be no less effective, and at times better, than the conventional methods so deeply ingrained in us through culture and education. This is the breakthrough!
by Marguerite Beyer, this week's invited contributor
This content is not for publication
©Intuality Inc 2022-2024 ALL RIGHTS RESERVED