Decision-making takes us from predictions to actions
______________________
Dr. Howard Rankin writes about "Predictions, Perception and Predisposition"
Michael Hentschel explains "Teaching and humanizing AI how to predict effectively"
Grant Renier states "We accept these odds as success."
|
|
|
TRADING PERFORMANCE RESULTS OF THE INTUALITYAI SYSTEM HAVE MANY INHERENT LIMITATIONS. NO REPRESENTATION IS BEING MADE THAT ANY ACCOUNT WILL OR IS LIKELY TO ACHIEVE PROFITS OR LOSSES SIMILAR TO THOSE SHOWN. IN FACT, THERE ARE FREQUENTLY SHARP DIFFERENCES BETWEEN REPORTED PERFORMANCE RESULTS AND RESULTS SUBSEQUENTLY ACHIEVED BY THE SYSTEM OR PORTFOLIO. THERE ARE NUMEROUS OTHER FACTORS RELATED TO THE MARKETS IN GENERAL OR TO THE IMPLEMENTATION OF THE SYSTEM WHICH CANNOT BE FULLY ACCOUNTED FOR IN THE PREPARATION OF REPORTED PERFORMANCE RESULTS AND ALL OF WHICH CAN ADVERSELY AFFECT ACTUAL TRADING RESULTS.
|
|
|
Predictions, Perception and Predisposition
|
I live in coastal South Carolina, which abuts the Atlantic Ocean. Between July and early November we get regular predictions about tropical storms and developing hurricanes. In over thirty years of living here, we have had to evacuate for impending storms half a dozen times. The worst damage was inflicted by Hurricane Matthew in 2016, when 125,000 trees were lost and several of my neighbors sustained significant property damage.
|
|
|
|
Weather predictions have become increasingly accurate over the years, with a 3-day forecast now having good reliability. Obviously, the further into the future, the less reliable the forecast becomes. What's interesting is that the various forecasts generally agree well with one another. However, the reactions to those predictions vary enormously from individual to individual.
At one extreme of the spectrum are those who are constantly prepared for evacuation, watch the Weather Channel 24/7, and at the merest hint of trouble are on the road to a safer location. At the other are those who will never leave, even in the face of a Cat 5. These folks are highly skeptical of the warnings and don't even know who Jim Cantore is.
My point is that any prediction, in whatever form, is filtered through the experience and perception of the observer. And what significantly shapes that perception is the feedback they have received from their own past decisions about such predictions. For example, I know one man who fell firmly into the cynical category and was determined to ride out any storm. He stayed through a major hurricane – and almost died. Now, he evacuates when told to do so.
Similarly, there are many long-term residents who have left, got stuck in horrendous traffic jams, and spent days and a lot of money, only to see the storm turn out to be nothing worse than a typical fall day in London. We have learned to wait as long as possible, and thus get the most reliable forecasts, before making a decision.
The value of predictions is thus highly dependent on what observers actually think about those forecasts, and such thinking is heavily influenced by many personal factors, especially the feedback they received from their previous decisions. Indeed, this is how IntualityAI operates: it processes data, predicts outcomes, creates actionable alerts, and then adjusts them based on incoming data and feedback on previous predictions and actions, constantly adapting to new circumstances.
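That predict-then-adjust feedback loop can be sketched in a few lines of code. This is a deliberately minimal illustration, not IntualityAI's actual algorithm: the `AdaptivePredictor` class, the learning rate, and the observation values are all invented for the example.

```python
# Minimal sketch of a feedback-adjusted predictor (hypothetical, for
# illustration only): each new observation nudges the next forecast.

class AdaptivePredictor:
    def __init__(self, initial=0.0, learning_rate=0.3):
        self.estimate = initial          # current best guess
        self.learning_rate = learning_rate

    def predict(self):
        return self.estimate

    def update(self, observed):
        # Feedback: move the estimate toward the observed outcome,
        # in proportion to how wrong the last prediction was.
        error = observed - self.estimate
        self.estimate += self.learning_rate * error
        return error

p = AdaptivePredictor(initial=10.0)
for observed in [12.0, 14.0, 13.0, 15.0]:
    forecast = p.predict()   # act on the current prediction
    p.update(observed)       # then learn from what actually happened
```

The point of the sketch is the shape of the loop, not the arithmetic: every prediction is provisional, and each round of feedback pulls the next forecast toward reality.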
|
by Howard Rankin PhD, Science Director, psychology and cognitive neuroscience
|
|
|
Teaching and humanizing AI how to predict effectively
|
Predictions are critical to decisions and to getting anything done, yet they are anathema to computers, which expect mathematically exact determinism.
There is no action that we take that is not based on some kind of conclusion (a prediction) about the impacts of the decision to act. Yes, a lot of people's actions are thought to be thoughtless, and certainly many actions are simply stupid. But the actions are in some way premeditated, however little meditation has actually gone on. We can assume that people make rational or irrational decisions based on some input that gives them the chance to react to some actionable alert.
|
|
|
|
In health, for example, an actionable alert can be a shock of pain with an involuntary response, or it can be a piece of information, such as a sensor-based health warning, that sometimes gets voluntarily ignored. The difference is not whether the pain or the warning is a prediction, but in the amount of time the person has to decide how to react. In either case, the person has received an actionable alert and is forced to make a decision.
Every action we take has a decisiveness about it. Making no decision is still a decision. The real question is whether good data will result in good decisions, and there we have the beginnings of the need for a good prediction. Data alone may or may not provide a good prediction, and so may not be much help in acting on a decision with any confidence that the outcome will be good.
That confidence is in turn a probability, because not everything is known about the quality and consequences of the data, of the decision, or even of the actionable alert. That uncertainty is the essence of why human beings live in a world of probabilities and must continually adjust, trying to make the most of a changing environment. Real-time data, real-time thoughts about consequences, real-time reflection on experience, and real-time emotions all play large roles in which decisions are made, on the basis of whatever predictions can be instantly made and optimized with action.
AI does not do well with prediction. It can be trained to do probabilistic analysis, and it can be trained to analyze past data for repeating patterns. But oddly enough, AI does not have the courage to make human decisions. Part of that is accountability: can we blame an AI for making a decision when we are the ones who enabled it to decide without our input? Should anyone give AI that much leeway anyway? Almost all decisions have a probabilistic negative component.
If an AI is told to do no harm, it will dutifully choose to do nothing. That is not cowardice, but basic programming and the definition of "no". If an AI is told to do minimal harm, the definition of "minimal" requires a prediction and a ranking of outcomes, and whoever gives the AI that instruction will bear the accountability while having lost any ability to control the AI's black-box analysis. That does not sound like a bearable proposition.
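The jump from "no harm" to "minimal harm" can be made concrete with a toy calculation. In the sketch below, every value is invented: the two actions, their possible outcomes, and the probabilities are hypothetical, chosen only to show that ranking by expected harm requires predictions, and that doing nothing is itself a ranked choice.

```python
# Hypothetical example: "minimal harm" forces a prediction (probabilities)
# and a ranking of outcomes. All numbers here are invented.

actions = {
    # action: list of (probability, harm) outcome predictions
    "do_nothing": [(0.7, 0.0), (0.3, 10.0)],  # inaction can still hurt
    "intervene":  [(0.9, 1.0), (0.1, 5.0)],   # small harm, almost surely
}

def expected_harm(outcomes):
    # Weight each predicted harm by its predicted probability.
    return sum(p * harm for p, harm in outcomes)

# Rank the actions: the "minimal harm" choice minimizes expected harm.
best = min(actions, key=lambda a: expected_harm(actions[a]))
```

With these made-up numbers, doing nothing carries an expected harm of 3.0 while intervening carries 1.4, so the ranking favors intervention. The accountability problem is visible in the code itself: whoever supplies the probabilities and harm values has already made the decision, even if the machine executes it.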
If we expect AI to make a truly intelligent and useful contribution to decision analysis, we must teach it to predict the way we do, and teach it to document its thinking, so that black-box analysis does not lead to dreadful decisions made by machines in our name or in their own.
IntualityAI teaches (informs) AI how humans make more successful decisions. As we have written elsewhere, both human rationality AND irrationality should be expected to be involved in anything that goes on in our world. Not only do humans NOT wish to eliminate all irrationality, they use seemingly irrational behavior intuitively: to adjust for the vagaries of real, uncontrollable probabilities of events, to adjust for other people's behaviors, and to adjust for the fact that not everything with probabilities attached is precisely predictable. These adjustments make our decisions conform to our intuitions, our speculations, our willingness to gamble, our fears and desires, and our emotions when facing uncertainties. As we always have, we survive the challenges we face by doing our best to predict, decide, and act.
Is mathematical determinism what we want? Will future collaboration between AI and Humans work? Will AI "assistance" allow us to keep our rationality and irrationality, and will the AI possibly respect and value the irrationality for its ability to make and evaluate predictions that (on balance) benefit humanity's survival and permit us to thrive?
|
by Michael Hentschel, anthropologist, economist, venture capitalist
|
|
|
We accept these odds as success
|
It would be great if our world were totally rational. Mathematics would rule our lives. But our squirrelly individual behaviors, driven by our need to be unique and noticed, say No to determinism and Yes to "You can't tell me what to think!" Mathematics races to stay relevant; hence the explosion of probability theories, where nothing is really 'known'. It's all about percentages, best guesses, and the battle between the owners of 'facts'.
So if AI is going to be really intelligent, as in 'artificially intelligent', it must simulate the functions of human prediction, decision-making, action and feedback, of which more than 95% is subconscious - intuitive.
This fancy chart summarizes how IntualityAI crunches incoming data, closely mimicking the as-yet-unequalled speed of our brains. It merges, combines, forgets and relearns. It makes many mistakes.
|
|
|
|
It's only 58% right about football games, 53% about the markets, 80% about elections. But then Seth Curry makes fewer than 50% of his 3-pointers, George Soros shorted the British pound for $1 billion, and Biden won with 51.3% of the vote. I've never caused an accident in the millions of miles I've driven (though I've been hit plenty of times). We accept these odds as success. Would we accept an AI that does this well?
We have always been uncomfortable with and suspicious of the physics of our world. IntualityAI is trying to bridge the gap. The current state of AI has a long way to go.
|
by Grant Renier, engineering, mathematics, behavioral science, economics
|
|
|
This content is not for publication
©Intuality Inc 2022-2024 ALL RIGHTS RESERVED
|
|
|
|