Decisions under uncertainty must involve a sense of wisdom that IntualityAI calls "Intuitive Rationality": a literally personal, warm ingredient in an otherwise technologically cold machine rationality.
Machines insist on certainty, so by themselves AIs are almost incapable of predicting except through rote mathematical models of the past. Any probabilistic decisiveness requires a human factor, one that cannot simply replicate past human decisions in a changing future. Can machine intelligence develop wisdom? Will mere mathematical probability optimizations constitute future wisdom applied to our human world, or is there special value in the human touch?
Technologies can now capture past data about human decision-making that even we ourselves forget. Some of those historic lessons are critical to future event management. Will machine knowledge of the past wisely apply those lessons, and can we all go home now? Will the many warm-blooded human biases that have gone into effective human decisions in the past be discarded in favor of overwhelming past data wielded by statistical optimizations of every future action? Human behavioral science and dynamics are ignored at everyone's peril, even to the detriment of the machines themselves.
Decisions matter, and decisions involve predictions. Growth in technological replication and scale does not necessarily change the number of critical decisions, which are still largely made by humans who decide based on intuition, prediction, and choice, amid inevitable uncertainty that continually challenges predictability. Humans decide on the basis of continual prediction, often with only instinctive intuition as a guide. That is a key form of wisdom.
Replication and scale create an impression of certainty, as large corporations and governments tend to seem more stable and more confident in what they do. But the uncertainties within are merely masked by size, sometimes swamped by it, and machine rationality can easily miss or bypass human rationality and wisdom. Large organizations create large successes, but also large failures. Small failures are more manageable and avoidable. Uncertainty touches all.
The uncertainties within human reality versus artificial reality, just as with human intelligence versus artificial intelligence, require small-scale decisions that cannot be simplified or generalized without great damage to human populations, especially the individuals within them. Humans always have striven, and always will strive, to be and remain individuals, though there have been innumerable efforts to suppress and manage humanity in some better way.
The actual “human condition” surpasses generalizations, probabilities, quotas, racial statistics, and all the other mathematical simplifications by which machine and human decisions could be made. Human biases were developed to address the spaces where no certain prediction or decision can be determined.
Technologies have always been tools for humans to increase the positive impacts of our existence, and the AI revolution is one of greater scope and perhaps greater delegation. Machine tools were at first largely mechanical, changing the way humans shape the world; then increasingly informational, changing the world humans inhabit; soon they will be relied upon to be increasingly rational, changing the way humans think, with all the baseline rules and applications driven by past and present human programming instructions.
Is there a scenario where future human programming instructions can no longer add value? Will some future human-programmed AI become the latest and final effort to enslave humanity and robotize everyone? All the required technology already exists. So does the potential for a single decisive human decision to improperly program an AI to replicate past instances of tyranny over humanity. Taking human thought out of the ongoing picture is an invitation to doom, though the probability of some successes also exists.
Just because a Singularity establishes machine intelligence as faster and superior to human intelligence does not mean that human rationality can no longer be the baseline of all machine thinking. Software will be based on the totality of prior human programming, reflecting the positive and negative impacts of all human biases, successes, failures, philosophies, and theories. All past truths must be counted and accounted for; otherwise a single rogue AI, programmed to outcompete all others, could wipe wisdom from the face of the earth.
Do we already have enough data on human decision-making throughout history to never make mistakes again? We have no future data, only the data we choose to predict with. How would machines predict future human greatness and creativity?
We will soon face human decisions about whether to delegate just about everything to machines at massive scale (this will be more convenient and efficient, tempting perhaps, but not wise).