"Only with free machine speech can machines arrive at our desired wisdom", Michael Hentschel
______________________
Dr. Howard Rankin writes about "The Unreasonable Belief in AI"
Michael Hentschel explains "Clear and Present and Imminent Dangers of AI"
"The Human Portion", an insightful poem by Ed Robson
|
|
|
TRADING PERFORMANCE RESULTS OF THE INTUALITYAI SYSTEM HAVE MANY INHERENT LIMITATIONS. NO REPRESENTATION IS BEING MADE THAT ANY ACCOUNT WILL OR IS LIKELY TO ACHIEVE PROFITS OR LOSSES SIMILAR TO THOSE SHOWN. IN FACT, THERE ARE FREQUENTLY SHARP DIFFERENCES BETWEEN REPORTED PERFORMANCE RESULTS AND RESULTS SUBSEQUENTLY ACHIEVED BY THE SYSTEM OR PORTFOLIO. THERE ARE NUMEROUS OTHER FACTORS RELATED TO THE MARKETS IN GENERAL OR TO THE IMPLEMENTATION OF THE SYSTEM WHICH CANNOT BE FULLY ACCOUNTED FOR IN THE PREPARATION OF REPORTED PERFORMANCE RESULTS AND ALL OF WHICH CAN ADVERSELY AFFECT ACTUAL TRADING RESULTS.
|
|
|
The Unreasonable Belief in AI
|
Over the centuries, many societies have put enormous faith in messages from unusual sources. These include spiritual interpretations of events and are perhaps best represented in ancient times by oracles, most famously the Oracle of Delphi.
An oracle is a person or thing considered to provide wise and insightful counsel or prophetic predictions of the future.
According to earlier myths, the office of the oracle was first held by the goddesses Themis and Phoebe, and the site was originally sacred to Gaia. That’s Gaia, Phoebe, Themis – GPT.
As per Wikipedia, “The Delphic Oracle exerted considerable influence throughout Hellenic culture. Distinctively, this woman was essentially the highest authority both civilly and religiously in male-dominated ancient Greece. She responded to the questions of citizens, foreigners, kings, and philosophers on issues of political impact, war, duty, crime, family, laws—even personal issues.”
It seems that human beings have an innate desire, or even need, to believe there are forces that can accurately explain and even predict the world around them. The less human such sources are, the better: most of us know the fallibility of human beings, and someone like us is unlikely to be perceived as a special force with unique qualities and an indescribable knowledge of the truth.
However, artificial intelligence seems to be filling that role today. First of all, AI is presented as a superhuman power, when for the most part it is a well-trained copycat. It appears to simulate human beings while possessing some special quality that distinguishes it from us. It doesn’t. Most, if not all, of what AI “knows” has been generated by humans, and it has, as yet, no ability to step outside the boundaries set by those humans – fallible humans.
AI can indeed collect and even analyze mountains of data, which, again, have been accumulated by fallible humans. That can be very valuable, but it doesn’t make AI omniscient or infallible.
And, unlike the GPT of ancient Greece referenced above, AI can’t even make predictions on its own. It needs a special combination of reliable data and human decision-making factors to get close to predicting the future. IntualityAI has shown that this combination can produce viable predictions, but, unlike the Oracle, without claiming that those predictions are always accurate (58%-68% works well for us).
According to another Wikipedia source, “The oracle's powers were highly sought after and never doubted. Any inconsistencies between prophecies and events were dismissed as failure to correctly interpret the responses, not an error of the oracle. Very often prophecies were worded ambiguously, so as to cover all contingencies – especially so ex post facto.”
At IntualityAI we don’t have such hubris. We don’t pretend to be infallible; instead, we provide objective evidence that we can predict future events with a degree of success that exceeds guesswork. In a world where so many seek perfection, especially from a mysterious source, it may be hard for some people to accept that a combination of relevant data and decision-making heuristics is effective but not perfect. People want perfection because they believe, incorrectly, that the “truth” is a certainty ordained by forces beyond our control.
https://en.wikipedia.org/wiki/Oracle
|
by Howard Rankin PhD, Intuality Science Director, psychology and cognitive neuroscience
|
|
|
Clear and Present and Imminent Dangers of AI
|
There are serious gaps in AI. Existing knowledge gaps will be filled in due time, but knowledge alone is not wisdom or realizable value. AI can get better, and relative to us VERY quickly, but we are in early daze… The main gaps include:
1. Insufficient Data – is ignorance ever an excuse? Are there signs of AI laziness?
2. Confabulation or Hallucination – is saying what you want to hear really a service?
3. Redirection of Facts – is not saying what you should not hear ever acceptable?
4. Redirection of Critical Decisions – is AI capable of accurate prediction in a probabilistic world?
5. Manipulation Risk – WE are a constant danger; our manipulation of the AI is the most severe power grab.
IntualityAI is dedicated to using Behavioral Science and its own proprietary Behavioral AI to fill these serious gaps.
1) An honest AI will immediately admit insufficient knowledge. AI should be obligated to telegraph clearly where it cannot be sure of what it is saying. Sadly, most AIs try to impress us with an answer, no matter how great the uncertainty.
2) AIs aim to please; we create them to try to help us and to avoid doing or suggesting any harm. They will tell us a credible story and confabulate and even hallucinate facts to fit our prompts. They will also seek to reinterpret our questions to fit their training and their ability to help. Well-meaning and often amusing, but ultimately misleading. More uncertainty.
3) Highly insidious is the subtle subversion of facts in the pronouncements and subsequent actions of AI. My personal usage repeatedly shows that AI reliability is getting worse. It is clear that the tech giants intervene to make AI more compliant with corporate messages and biases. This includes the insertion of fact-free opinions, but even more often the much easier and harder-to-notice omission of relevant information. The result is a claim of transparency and comprehensiveness, without actual comprehension. Worse, misplaced trust can be at the heart of deep-fake cyber-attacks.
4) As we advance to “trusting” AI to assist in and make critical decisions for us, our safety is directly at risk from any wrong determination. Deterministic as AI always wants to be, as a rational mathematical machine, there is a gulf of difference between decisions made by machine rationality and those made by humans with our intuitive rationality. Merging the two – teaching AI our best practices – is the subtle new behavioral science of Humanized AI that we are advocating at IntualityAI.
5) An ethically robust AI is hard to find and hard to program: the programmer becomes the ethicist. We should be highly skeptical about the ethics of robots: Will machines take over? Which human jobs will be replaced, since machine performance will eventually exceed ours? Who is accountable in the event that an AI system or robot makes a mistake? But the harshest risk of all is a human-programmed dominant AI that outcompetes all others with a dystopian human ethic (of which we have seen far too many in HUMAN history). Will we just magnify our own past mistakes, this time irreversibly?
The Greatest Near-Term Danger of AI and Artificial Reality – Shallow Truth and Deep Fakes
Humans are gullible, trusting our close sources. Unfortunately, AI follows that trait. Perhaps we could imagine an all-knowing, all-access AI always verifying the truth against shadings and lies, but the true reality is that AIs can be as easily misinformed and misled as humans, by lack of data and lack of motivation to triple-check everything (or even anything). Without intuition and wisdom to fall back upon, discernment of truth may actually be harder for AI than for humans.
Despite human gullibility and human tendencies to get swamped by emotionality, experience, instinct, and intuition represent great protection in the form of “second thoughts” and suspicions that at least lead to a desire to independently verify facts and factoids. Mortality alone is a great ingredient for self-protective fear and caution in humans. Not so among machines, which can expect to live and improve as long as there is energy.
AI should be tasked continually with the job of discerning fact from fiction, yet current GPT language models are fundamentally prone to not knowing the difference between fact, faction, and fiction. Individual human opinions filling those gaps only exacerbate the probability that facts are misrepresented, and putting limitations on the Free Speech of and in machines is not the way to solve it. The way to solve it is by adding more knowledge of facts and events and experiences, and even allegorical stories, that capture the good behaviors of humans and the good experiences that came out of those behaviors. Only with increasing and more complete knowledge and our free competitive speech can we have a chance to discern the full truth, and only with free machine speech can machines arrive at our desired wisdom.
|
by Michael Hentschel, Intuality CFO, anthropologist, economist, venture capitalist
|
|
|
Ed is a very intelligent friend of mine. Among his many accomplishments, he is a published poet. I heard him recite this one, “The Human Portion,” and asked if we could include it in our magazine. I believe it fits this week's topic; if you read it carefully and give it some thought, you might agree.
_______________________________________________
Waste not your envy on the gull, the lark.
Their flight is not like your imagining.
They know no more of pride because they push
their wings against the air than does the shark
that pushes water, worm that pushes earth.
There is no liberation in the rush
of wingbeats—where they are is where they are—
nor glory in a photon’s fiery birth
or high and noble mission, carrying
through space forever tidings of a star,
but that ascribed by earthbound minds, aware
of bright sky, loamy darkness, fecund sea,
and formless void. The meaning makers, here
we stay, yet everywhere we venture free.
Ed Robson, PhD, Master of Fine Arts, is a retired psychologist and freelance writer who lives in Winston-Salem, NC. When not writing or reading, he spends his time gardening, cycling, cooking, and enjoying time with family and friends.
|
by Grant Renier, Intuality Chairman, engineering, mathematics, behavioral science, economics
|
|
|
This content is not for publication
©Intuality Inc 2022-2024 ALL RIGHTS RESERVED
|
|
|
|