______________________
"The Main Threat of AI", by Dr Howard Rankin
"The AI Threat of Decision-making without Human Representation”, by Michael Hentschel
"We must understand the dynamics of our behavior to understand the true threat of AI!", by Grant Renier
|
- Download our new Predictions App for Android smartphones
- Presidential prediction is Biden 49% / Trump 44%
- Adobe up 66% since May 19, 2023
- The IntualityIndex predicts a slowing recovery after the dip in late April
|
— Historical Prediction Results —
|
In a recent article, researchers identified and evaluated the biggest threats posed by AI, dividing them into tiers that reflect the likeliest and most dangerous hazards.
These were the top six:
|
- Audio/video impersonation
- Driverless vehicles as weapons
- Tailored phishing
- Disrupting AI-controlled systems
- Large-scale blackmail
- AI-authored fake news
|
What do they all have in common? They destroy trust. Yes, even the weaponized driverless vehicle does that. What would happen if just a few stories of such an event appeared in the media? People would become deeply afraid of all the cars on the road, especially any that could easily be identified as driverless. Threat would dominate.
In a recent Psychology Today piece, "The Psychological Impact of Artificial Intelligence," I made the point that violence and brutality are in many ways outdated and primitive methods of coercion, as Vladimir Putin, for one, has hopefully learned in the recent past. When you are attacked, you attack back. When you are in danger, you fight. Now there's a much better and far more effective way of controlling people. It's called manipulation.
Instead of forcing someone to your point of view, how about getting them to endorse and embrace it? There's much more power in that, and far fewer, if any, retaliatory threats.
People need trust to build their perception of the world.
As Eric Uslaner wrote in his book The Moral Foundations of Trust:
“Trust is the chicken soup of social life. It brings us all sorts of good things, from a willingness to get involved in our communities to higher rates of economic growth, and, ultimately, to satisfaction with government performance, to making daily life more pleasant.”
The undermining of trust through fake news, false identities, and manipulation is a disaster for humanity. It breeds suspicion, paranoia, and even an unwillingness to accept that there is such a thing as "truth."
The brain is wired in such a way that defensiveness shuts down social engagement. It's too risky to reach out, so we become more isolated, attached to our ingroup, and suspicious of anyone outside it. Fake news and manipulation are poisons that erode our belief in objective reality, leading us to develop highly biased interpretations that make us feel more comforted in the face of threats. Could there be anything more damaging to humanity?
If you don't know what to believe, everything is a potential lie.
The development of mistrust destroys not just moral values but the behaviors that humanity needs to survive: connection, understanding, gratitude, empathy, compassion, and love.
Fake information is the atom bomb dropped on the soul of humanity, and it is already falling on us.
|
References
Caldwell, M., Andrews, J.T.A., Tanay, T. et al. AI-enabled future crime. Crime Sci 9, 14 (2020). https://doi.org/10.1186/s40163-020-00123-8
Uslaner, E.M. The Moral Foundations of Trust. Cambridge University Press (2002).
|
by Howard Rankin PhD, Intuality Science Director, psychology and cognitive neuroscience
|
The AI Threat of “Decision-making without Human Representation”
|
When I immigrated into the United States as a 10-year-old, I officially became a legal resident Alien. This is a role I have chosen to continue, as I simultaneously carry the passport of another Strange Land called Germany. Whether being an Alien has rendered me more objective or more subjective is unclear, but I am keenly interested in, aware of, and respectful of the structural development of America, having gone through a comparatively deep Civics Education, including occasional wonderment.
|
With anthropological/archaeological training, I believe this system is theoretically the best that Humanity has yet devised: a system as flawed as humans are, but one that incorporates many of the most important historical lessons.
I'm thus inspired to write about a perceived threat that AI poses, one analogous to a foundational grievance that preceded this country's Constitution: "taxation without representation."
In an AI context, I would call the imminent threat "decisions without representation": the increasing delegation of human decision-making to autonomous, black-box machine algorithms.
Our technology is directly involved in algorithmically humanizing AI, dedicated to developing decision-making algorithms that are fundamentally human-influenced, humanly intuitive, and intuitively rational. We are deliberately trying to increase human awareness and human representation within AI. Even with that sensitivity, human representation cannot be just some average algorithmic adjustment.
Future machine decisions will carry consequential dangers over which humans need to retain influence. Personally, I would feel threatened by a decision system delegated to an AI that, benevolently or not so benevolently, decides for me without my having any influence on the outcome. This is what the early American taxpayers protested against. Arguably, the severity of a tax on tea pales in comparison to the seriousness of life decisions made in the absence of human oversight and accountability.
Add to this the increasing delegation of decision-making under probabilistic uncertainty, and we can understand the legitimate fear of autonomous machine decisions made on our behalf that could cause substantial damage if there is no final review process, involving specifically human agents, before execution. And once a decision is executed, who is to blame, and what are the remedies?
In the final analysis, humans are in charge of programming the technology that enables and multiplies our effectiveness and our resulting productivity. For example, voting machines producing erroneous results are imaginable only via corrupted human programming, accidental or intentional, and for the moment the final decisions are still left to humans to sort out. But are we already close to not being able to sort it all out?
Managing Civics, or anything else, without human beings as the deciders will become a momentously dangerous endeavor. In the long run, humans must somehow maintain their vote in each critical decision. Can AI be created with that in mind, or will the black boxes of the future decide autonomously and dictatorially? Will black boxes be auditable, so that bad programming, bad process, or bad decisions can ever be identified?
I would want myself, or someone like myself, to have representation in most final decisions. Not a panacea, but an approach to a solution. Not a guaranteed outcome, but an attempt to avoid the worst irreversible damage.
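To make the idea concrete, here is a minimal sketch in Python of what a "decision with representation" might look like in software: an AI proposal is gated behind a human reviewer, and every step is written to an auditable log. This is my own illustration, not Intuality's system; the function names, log format, and review mechanism are all assumptions.
```python
# Hypothetical sketch of a human-in-the-loop decision gate with an audit trail.
# Nothing here is Intuality code; names and formats are illustrative only.
import json
import time

AUDIT_LOG = "decisions.log"  # assumed location for the auditable record

def audit(stage, payload):
    """Append a timestamped record so the 'black box' remains auditable."""
    with open(AUDIT_LOG, "a") as f:
        f.write(json.dumps({"time": time.time(), "stage": stage, **payload}) + "\n")

def decide_with_representation(proposal, human_review):
    """Never execute an AI proposal without a human vote on the outcome."""
    audit("proposed", {"proposal": proposal})
    approved = human_review(proposal)      # the human retains the final vote
    audit("reviewed", {"approved": approved})
    if not approved:
        return None                        # no execution without sign-off
    audit("executed", {"proposal": proposal})
    return proposal

# Example: a console prompt stands in for the human agent.
decision = decide_with_representation(
    {"action": "rebalance_portfolio", "amount": 1000},
    human_review=lambda p: input(f"Approve {p}? [y/N] ").strip().lower() == "y",
)
```
The point of the sketch is the ordering: the audit record and the human vote both come before execution, so blame and remedies remain traceable after the fact.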
|
by Michael Hentschel, Intuality CFO, anthropologist, economist, venture capitalist
|
We must understand the dynamics of our behavior to understand the true threat of AI!
|
We humans are prone to thinking in a single dimension: A plus B equals C. That works for a math problem. Its solution is correct only if nothing changes between the statement (A plus B) and the solution (equals C). But in our real world, where time flows forward without stopping, A and B have changed by the time we calculate C. So C is now history. There's a new C to replace it and to contend with.
|
How are the dynamics of this simple equation related to the threat of AI? Well, before attempting to answer that, I need to confess that I left out the final step. The change in A and B over time is due in part to feedback from the previous C. But the C that is being fed back picks up some new information on the way, so we'll call it C+.
If I see a squirrel in the road (A), I turn the wheel to avoid him (B) and miss him (C). But if I don't make another steering correction (a new B), I'll end up in the ditch (no C+). The only way I can make another steering correction (a new B) is to feed back the result of the prior C as a C+ and then, with a gasp of relief, drive on down the road. On the other hand, I could have failed to correct my direction and crashed into the ditch. The incident was so overwhelming, and the time to react so short, that the squirrel, in effect, determined my future! No C+.
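A minimal sketch in Python may make the loop concrete. It is purely illustrative: the steering gain of 0.5 and the "ditch" threshold of 2.0 are assumptions of mine, not part of any real model.
```python
# Illustrative only: the A + B -> C -> C+ loop from the squirrel story.
def drive(road_events, feedback=True):
    """A: each event forces a swerve; B: steering correction; C: position."""
    position = 0.0        # C: 0.0 is the center of the lane
    correction = 0.0      # B: the current steering input
    for swerve in road_events:              # A: e.g., a squirrel appears
        position += swerve + correction     # A plus B yields a new C
        c_plus = position                   # C+: the result, observed anew
        if feedback:
            correction = -0.5 * c_plus      # feed C+ back into the next B
        print(f"A={swerve:+.1f}  B={correction:+.2f}  C={position:+.2f}")
        if abs(position) > 2.0:
            print("No C+ -- the squirrel determined our future (the ditch).")
            break

drive([1.0, 0.0, 0.0, 0.0])                 # corrections ease us back on course
drive([1.0, 1.0, 1.0, 0.0], feedback=False) # without C+, the swerves accumulate
```
With the feedback line active, the car settles back toward the lane center; remove it, and the uncorrected swerves pile up until the ditch.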
So maybe you see where I'm going with this. Let's label our squirrel SquirrelAI and the direction of the car Us. Under what circumstances can SquirrelAI cause Us to 'crash'? Can Us react quickly enough to avoid the 'accident'? Importantly, can the '+' in C+ contain enough information, quickly enough, to adjust Us and 'miss' SquirrelAI?
In a prior issue, I wrote about the human information in all data: the "unique tapestry of human-defined causes and effects." That is the major element represented by the '+' in C+. It is ingrained in all the data and information that we experience and influence. Us understands it; SquirrelAI does not.
Howard's warning that "fake information is the atom bomb" threatening to "fall on us" can be put in the context of the actual atom bomb threat we have faced since the 1940s. I submit that humanity has so far managed that threat.
Michael's concern about "decisions without representation" makes the strong point that AI needs to be 'open-sourced' through a democratic process, one that prevents SquirrelAI from crashing Us into the ditch.
|
by Grant Renier, Intuality Chairman, engineering, mathematics, behavioral science, economics
|
This content is not for publication
©Intuality Inc 2022-2024 ALL RIGHTS RESERVED