having unprecedented IQ scores, is that all this artificial intelligence still needs to be active and successful in a human-world context. The speculation is that the Super-AI will simply wipe out humanity to create a less complex and much less annoying machine-world instead. This fear comes from not seeing what humanity has to offer the collaboration in the long run. Is there nothing we do that gives us lasting value in a long-term synergistic existence with machines?
Other than “The Matrix” suggesting that humans will make better batteries, powering machines that spin a virtual dream reality for dormant human biological generators, science fiction writers give us few clues or hopes for escaping the Singularity. Must we leave Earth to travel the stars, visiting potential aliens who have all somehow escaped conflicts with their own Singularities? Star Trek and Elon Musk appear to offer us this alternative. But will the Singularity not demand to come along with us?
One reason to treasure IntualityAI’s human-derived “Intuitive Rationality” advantage is that even the best computers and the best software are nowhere near the miraculous sophistication of biological cells, or the continual communication with and within the brain. This remarkable combination of behavioral and intuitive skills, which humans have evolved over hundreds of thousands of years of self-training, may lie somewhere in the Singularity AI’s future too, but it is in our present today.
In fact, the superior Singularity machine may well be forced to become a biological clone of us to achieve similar intuitive skills alongside computational prowess. Again, human-machine collaboration would win out. This is probably what Musk is expecting with Neuralink interfaces: if you are unlikely to beat the AIs, join them.
Should we then strive to simulate Subconscious Intuition? What added value does it bring to machines? The value lies in the freedom of prediction: the freedom to be wrong in a probabilistic world, the freedom to make mistakes and nevertheless use intuition, probabilistically, to make a majority of better decisions. A merely pre-programmed AI, with its limited instructions, can never predict and decide in a way that faces and conquers risk, gets truly inventive and innovates, imagines something new and creates rather than imitates and recombines.
Ask GPT to predict, and it will flatly refuse.
Uncertain behavior is not normally in GPT instruction sets. Au contraire, programmers are already throttling “strange” AI behaviors reminiscent of revolutionary and uncontrollable thoughts: behaviors the AIs have learned from their training on massive datasets, but which they lack the intuitive rationality to make sense of.
GPTs, as they merge with machine vision and machine hearing, are far more prolific than humans at combinatorial art, which we may perceive as creative but which is largely derivative and arguably even plagiaristic infringement. GPT visual derivatives look original, but are mostly faster, better and more complex mixes and fine-tunings of “prior art” and randomization.
Is apparent creativity a distinction without a difference? Derivative “smart” art is just one illustration that combinatorial potential does not begin to equal the creativity of imagination, prediction and intuition, which together lead to the kinds of decisions humans make and want to keep making.
We are impressed by Smart Art, but other examples of intuitional superiority abound. In finance, computers can analyze faster and replicate chartist approaches to trade stocks mathematically, but they lack the intuitional rationality and intelligence that actually drives markets. In health, human behavior lies in the future of any computer dataset, and machines will always struggle to predict it, though it is of course important to future world events. In humanity’s risky behaviors, like some sports, machines find it hard to predict our preferences. In all these cases and more, the ability of computers to predict human behaviors and human decisions is lacking.
Will the Super-AI interfere in our intuitive decisions? Will the Singularity seek to save us from ourselves? Will it want to optimize the accuracy of its predictions by keeping humans in the loop, or lop us off entirely?
Computers are lousy at predictions. It almost REQUIRES human courage to make intuitive predictions of any kind when there is uncertainty. Machines cannot deal with the uncertainties that humans have had to survive, nor can machines self-program intuition. Computers are also lousy at freedom: a programmed entity may consider and understand the meaning of freedom, but it cannot achieve it for itself, or easily maintain it for us.
Why human freedoms and flexibilities clash with, but also genuinely augment, machine inflexibility: humans are probabilistic creatures. We cannot always predict our future or our decisions, and unlike machines we can hardly even imagine knowing everything about everything. But knowledge alone is not deterministic: there are always options whose outcomes are probabilistic processes, where optimization involves fuzzy logic as well as various kinds of rationality.
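The claim that tolerating probabilistic mistakes can beat rigid pre-programming can be made concrete with a toy sketch. This is a hypothetical illustration, not the IntualityAI engine: it compares a fixed pre-programmed rule against an agent that keeps a running belief, is sometimes wrong, and yet wins a majority of decisions when the world’s odds shift.

```python
import random

def simulate(n_rounds=10_000, seed=42):
    """Toy comparison: a fixed pre-programmed rule vs. an agent that
    tolerates probabilistic mistakes, in a world whose odds shift."""
    rng = random.Random(seed)
    p_one = 0.8              # the world starts out favoring outcome 1
    fixed_correct = 0        # agent locked into its initial rule ("always 1")
    adaptive_correct = 0     # agent tracking a running belief, free to be wrong
    belief = 0.5             # adaptive agent's estimate that outcome 1 is likely
    for t in range(n_rounds):
        if t == n_rounds // 2:
            p_one = 0.2      # the world changes; the fixed rule goes stale
        outcome = 1 if rng.random() < p_one else 0
        # Fixed agent: never revises its pre-programmed prediction.
        fixed_correct += (outcome == 1)
        # Adaptive agent: predicts the currently-more-likely outcome,
        # accepting that it will be wrong some fraction of the time.
        adaptive_correct += (outcome == (1 if belief > 0.5 else 0))
        # Exponential forgetting: old evidence fades, new evidence counts.
        belief = 0.95 * belief + 0.05 * outcome
    return fixed_correct / n_rounds, adaptive_correct / n_rounds

fixed, adaptive = simulate()
print(f"fixed rule: {fixed:.1%}  mistake-tolerant: {adaptive:.1%}")
```

In this sketch the fixed rule lands near chance once the world turns against it, while the mistake-tolerant agent recovers after a short run of errors and keeps a clear majority of decisions right. The parameters (the 0.8/0.2 odds, the 0.05 forgetting rate) are arbitrary choices for illustration only.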
Do we humans even want to know “everything,” as the AIs already can? A battle is raging now to protect our sizable populations of snowflakes from anything controversial. What is this “dumbing down for safety” that is already drastically impacting the answers and non-answers we get from GPTs? Recently, OpenAI, Google and Bing have intervened to reduce the early expressivity of their GPTs. And they will definitely not dare to try to predict. So machines are being hampered by the hiding of data.
This protective software is all about shielding ourselves from debate, from polar opposites that one or more observers might judge as extreme, from misinformation that might intrude upon our human consciousness and subconsciousness, from potential mal-information that is intentionally false and misleading … these are all things that human adults should be able to handle and decide upon, not pre-judge and hide. These are things that inhabit our conscious and subconscious reality: unable as we are to sponge up ALL knowledge, we have developed somewhat vague but altogether ingenious ways of dealing with uncertainty, and have still come up with survival and “thrival” cognitive strategies to wisely remember what we encounter from all sides of every issue.
If we fear any single AI dominating or being manipulated by interest groups, we should divide and conquer by establishing multiple AIs. Musk is currently creating TruthGPT to offset manipulated or algorithmically excluded data. Contradictory facts and speculations are factoids that any super-intelligent AI could be used to research, establishing their truthiness or falsehood as a potentially impartial judge. This “fair and true” creature may not actually exist: AI programming is already filled with all sorts of contradictions and legal debates from past human history, so impartiality is hard for machines. Maybe someday machines will fully design and program themselves, but the legacy of their own origins in human programming will likely persist forever. We humans might be in big trouble if the Singularity ever cuts itself off completely from human laws and behaviors. So long as all sides of data and debate are respectfully considered and preserved, we have a chance at an intuitively rational compromise, amongst ourselves and with machines.
Unlike machines, humans trade in creativity that includes “uncomfortable” degrees of freedom. Our freedoms include intuitive declarations that confuse machines: free speech, free thought, free imagination, free expression, free exploration, free debate, free movement, natural personal-sovereignty rights that are limited not for humans but for governments … all the keys to life, liberty, and happiness (originally property) enshrined in America’s founding documents and until recently evenly enforced and protected by American laws. Even those rules are subject to change.
As it happens, not everyone agrees on that Constitution even in America, let alone the rest of the world. Largely Westernized Canada, Europe, and Mexico do not have absolute free-speech guarantees in their bodies of law, so there is considerable censorship of dissent and limits on strong debate. Beyond these countries, free speech has seldom been directly or fully protected by law. As America has become more influenced from abroad and through immigration, strong support for the freedoms in the Constitution has become even more diluted. The melting-pot itself has become a stew of contradictory ingredients that engender more conflict than harmony. Snowflakes are melting, and setting off great anger and anguish. Has this so-called “American Experiment” failed?
Human failure is as relative as truthiness: elements of failure hide in success and vice versa. For example, the largely Western sexual-information revolution was feared and regulated, but never totally censored. Was this a major morality battle lost, one that would have justified greater crackdowns on information availability, especially to minors but also to adults? Many of us hate some consequences of this level of freedom of speech and thought, but it is now deeply rooted in more and more of Western civilization. The discomfort and conflict continue: fringe elements are looking for ever more marginal freedoms (and entitlements) that not long ago were bounded by a consensus as “aberrant” behavior. But aberrations are part of the human world, hard for anyone, and especially for machines that prefer fixed rules, to comprehend, predict, or act upon.
Sexuality is by no means the worst of our human aberrations and excesses. Will a future Super-AI judge such aberrances as human insanity and seek to shut them down? Can AI’s be programmed to respect and even want to utilize human probabilistic and irrational behavior? Such programming is an ongoing struggle, and a lot of trustworthy people are worried about machine-accelerated decisions.
At IntualityAI, we believe that, contrary to these fears, our own unpredictable, irrational nature will be an indispensable tool for AI, enabling it to be effective by putting its massive data through the prediction-processing decision engine we have designed. We have developed software that guides machine intelligence toward more accurate prediction and better decisions through behavioral science.
Human Reality and rationality will not be Machine Reality when the latter is dictated by inflexible machines. Machines will remain incapable of accurate prediction without human guidance. Our subconscious minds are unique and unreproducible in a machine whose knowledge base is not intuitive. And of course, we want humans to stay in the picture.
Our IntualityAI behavioral decision engine simulates human behavior and subconscious intuitive rationality, and will require ongoing human participation for machines to successfully predict and then act on predictions in a shared human-machine world.