______________________
"Prediction, Probability and People", by Dr Howard Rankin
"Probabilistic Predicting and Deciding", by Michael Hentschel
"Thomas Bayes: prediction is about the probabilities of an uncertain future", by Grant Renier
- Presidential prediction: Trump 44% / Biden 49%
- Qualcomm up 17.8% since Oct 10, 2023
- The IntualityIndex predicts a dip, now pushed out to March 1st, then a reduced recovery to 25% in mid-April
- Read next week's issue for a major product announcement
— Historical Prediction Results —
Prediction, Probability and People
Human beings live in the present. We discount the future and try to ignore the past. That’s a critical adaptive strategy that helps us survive and thrive. As a result, we live in a world where something either happens or it doesn’t. This suits our dualistic brain, which can only focus on one thing at a time and operates in a binary fashion. It also means that our senses, active in the present and responsive to immediate situations, tie us intimately to current events and experiences.
“People don't realize that now is all there ever is; there is no past or future except as memory or anticipation in your mind.” – Eckhart Tolle
When we anticipate an event, our senses respond to the expected situation. I call this the Anticipation Effect, although there’s an 88% probability that someone has already coined that expression. As you think about that upcoming vacation, your body might relax. As you anticipate a second date, you might feel the excitement. Indeed, those feelings, or lack thereof, will determine your thoughts about the upcoming possibilities. Feelings influence thoughts and vice versa.
Now, if you think there’s a reasonable chance (at least 50%, and probably even more) that you are about to win a lot of money, or meet the love of your life, your mind-body will respond with the appropriate sensations; e.g., excitement, relief, joy. However, it won’t be the same as if you had actually won a lot of money or met the love of your life. In other words, prediction and probabilities are relatively difficult for the human mind-body to embrace, unless they are about danger, in which case they are exaggerated because the need for survival is a priority.
“There is no terror in the bang, only in the anticipation of it.” – Alfred Hitchcock
The biggest psychological driving force, therefore, is the need to anticipate danger and negative effects. Threat minimizes the complexity of probability. If the threat is disastrous enough, people will pay attention and the Anticipation Effect is activated even if there’s less than a 1% chance of it happening. The anticipation of positive outcomes works in reverse. Even if people are told that the probability of winning a lot of money is very high, at least 75% of them will say or think, “I’ll celebrate when it happens.” This is another version of risk-aversion bias.
So not only are we wired to think in a binary way about the present and to struggle with probabilities, but our reactions to predictions are also exaggerated when those predictions have negative consequences and potentially discounted when the projected consequences are positive.
“Wisdom consists of the anticipation of consequences.” – Norman Cousins
by Howard Rankin PhD, Intuality Science Director, psychology and cognitive neuroscience
Probabilistic Predicting and Deciding
Machines have difficulty making predictions and decisions without human “deciders” in the mix. This is because all predictions are probabilistic, based on changing and even inscrutable probabilities, and mathematical formulas alone can produce genuinely unwise predictions, no matter how much data is available. Humans add intuition, experience, and feelings that are hard to measure, therefore hard to teach, therefore hard to apply consistently. Even consistency offers no more than a hint of probability.
Do these intangibles really improve human predictions and decisions versus those of machines?
A mutual reluctance to let machines decide by themselves is well-founded: we simply can’t yet trust machines to make their own decisions, let alone ours, if they can’t predict accurately and wisely at the same time. We believe that our experiential wisdom is a better predictor, entitling us to be the deciders. Is that mere ego?
On average (that is, probabilistically), can smart AIs today predict and decide as well as we humans can? Sometimes yes, sometimes no. Humans are also probabilistically wrong many times, so the question assumes we can tell the difference. We instinctively understand that our mortal existence is at greater risk from a bad immortal-machine decision than from a bad human decision. The history of human inventions of mass destruction is not entirely encouraging, and now comes AI. Do machines really care about our lives lost, even if we could program them to care effectively?
Perhaps we should NEVER trust the machines with this critical task of predicting and deciding. After all, we would be putting machines entirely in charge of our fates if we delegated our decision-making to them. Yet our propensity and willingness to delegate (in the desire to scale our own productivity and powers) is moving rapidly toward mass automation, now AI automation, and even autonomy.
Fact is, we can’t trust ourselves for predictions and decisions any more than we can trust machines. This is because probability keeps open the floodgates of potentially fatal wrong decisions, and life must continually make decisions to stay alive. Machines smoothly continue operating while making no decisions at all. We living entities are not so designed: we live for predictions and decisions, not zombie behavior. Delegating all predictions and decisions to machines could render most of us zombies: jobless, lifeless.
If we can trust neither human nor machine predictions and decisions, will close collaboration help? Will working together really bring more consistency and better accuracy, or will it bring more disagreement and chaos? Because of probability, no one really knows, least of all the machines that are as yet derivatives of us. Any end to probability and risk would be like death for us: we live with and for intangibles and change and variety and adventure, and more success, even if we wish for more security and predictability.
My prediction: we will teach AI to predict with us and for us, but we will not let go of our unique ability to make small, medium, and existential decisions even in a vacuum. Because we must, to live and survive.
My conclusion: we will decide that, in AI, we have invented our latest technology of mass-creation. AI could easily become a cause of mass-creation rather than mass-destruction. Like all technologies, it carries great danger of excess and chain reactions all the while, creating risks and opportunities for humans to manage. And yes, we also need to be managed: more technologically, more accurately, more holistically, more wisely…
by Michael Hentschel, Intuality CFO, anthropologist, economist, venture capitalist
Thomas Bayes says that prediction is about the probabilities of an uncertain future
Our Intuitive Rationality is designed to understand complexity from a human-behavior perspective (actions now), rather than a historical-mapping perspective (searching for prior patterns), using Bayesian-based mathematics to predict the likelihood of future critical events and produce actionable alerts in anticipation of those events.
Thomas Bayes was an English clergyman whose theory of probability was published posthumously in 1764. In Bayesian thinking, every single measurement is considered ultimately precise! The measurement is a true fact that represents the complex interaction between our real world and the real measurement device - both affected by millions of unknown factors. When our blood pressure readings scatter, the problem is not the device; the problem is that we don’t know all the factors that contribute to the measurements - what is the cause of our high blood pressure.
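To make this concrete, here is a minimal Python sketch of that kind of Bayesian update: the readings themselves are treated as exact, and what gets revised is our degree of belief about one unknown contributing factor (hypothetically, high salt intake). Every number below is an illustrative assumption, not an Intuality parameter.

    # Minimal Bayesian update: the readings are taken as exact; the uncertainty
    # lies in an unknown factor (hypothetically, high salt intake).
    prior = 0.30                   # initial belief that salt intake is the culprit
    p_elev_given_cause = 0.85      # P(elevated reading | culprit present)
    p_elev_given_other = 0.40      # P(elevated reading | culprit absent)

    readings_elevated = [True, True, False, True]   # four precise measurements

    belief = prior
    for elevated in readings_elevated:
        like_cause = p_elev_given_cause if elevated else 1 - p_elev_given_cause
        like_other = p_elev_given_other if elevated else 1 - p_elev_given_other
        belief = like_cause * belief / (like_cause * belief + like_other * (1 - belief))
        print(f"elevated={elevated}  P(culprit | readings so far) = {belief:.3f}")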
Intuitive Rationality is based on the Bayesian argument that rationality is defined not by formal logic but by the ability to reason about uncertainty. Although people are typically poor at numerical reasoning about probability, human thought is sensitive to subtle patterns of qualitative Bayesian, probabilistic reasoning.
Probability is often taught as capturing “objective” facts about something: for example, gambling devices such as dice or cards. It is sometimes presumed to be a fact, for example, that the probability of a fair coin producing three consecutive heads is 1/8. However, in the context of cognitive science, probability refers not to objective facts about gambling devices or anything else; rather, it describes a reasoner’s degree of belief - their intuition. Probability theory is then a calculus not for solving mathematical problems about objects in the world, but for rationally updating beliefs and for intuition-driven decision making. This perspective is the subjective, or Bayesian, view of probability. We thus argue that human rationality, and the coherence of human thought, is defined not by logic, but by probability.[1]
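The contrast can be shown in a few lines of Python: the “objective” 1/8 calculation, followed by a Bayesian update of a degree of belief after seeing three heads. The 90%/10% prior and the two-headed alternative are illustrative assumptions for this sketch, not figures from the cited paper.

    from fractions import Fraction

    # "Objective" view: chance of three consecutive heads from a fair coin.
    p_three_heads = Fraction(1, 2) ** 3
    print(p_three_heads)             # 1/8

    # Subjective (Bayesian) view: probability as degree of belief, revised by evidence.
    # Assume we start 90% sure the coin is fair and 10% that it is two-headed.
    prior_fair, prior_trick = 0.9, 0.1
    like_fair = 0.5 ** 3             # P(HHH | fair coin)
    like_trick = 1.0 ** 3            # P(HHH | two-headed coin)

    posterior_fair = like_fair * prior_fair / (like_fair * prior_fair + like_trick * prior_trick)
    print(round(posterior_fair, 3))  # belief in fairness drops from 0.900 to about 0.529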
Intuitive Rationality applies Bayes Rule, but with a twist: all data has both a quantitative and a qualitative value. An MLB hitter has a batting average of .300 (quantity) and an average hit distance of 250 feet (quality). The combination of these two data elements has been found to be crucial in using Bayes Rule to successfully simulate human decision making.
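The exact weighting IntualityAI uses is not described here; the sketch below shows only one generic way a quantitative value (batting average) and a qualitative value (average hit distance) could be folded into a single Bayes-style update in odds form. The function, the 220-foot league baseline, and the combination rule are hypothetical illustrations.

    # Hypothetical illustration: combine a quantitative prior (batting average)
    # with a qualitative likelihood ratio (hit distance vs. an assumed league baseline).
    def combine(batting_avg: float, avg_distance_ft: float,
                league_avg_distance_ft: float = 220.0) -> float:
        prior_odds = batting_avg / (1 - batting_avg)               # quantity -> prior odds
        quality_ratio = avg_distance_ft / league_avg_distance_ft   # quality -> likelihood ratio
        posterior_odds = prior_odds * quality_ratio                # Bayes Rule in odds form
        return posterior_odds / (1 + posterior_odds)

    print(round(combine(0.300, 250.0), 3))   # ~0.328: the quality signal nudges the estimate up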
[1] Oaksford, M. and Chater, N., “Précis of Bayesian Rationality: The Probabilistic Approach to Human Reasoning,” Behavioral and Brain Sciences, 2009.
by Grant Renier, Intuality Chairman, engineering, mathematics, behavioral science, economics
This content is not for publication
©Intuality Inc 2022-2024 ALL RIGHTS RESERVED