It's ME!!, by ChatGPT, our guest columnist
Evolution depends on the adaptive use of knowledge, by Howard Rankin
ChatGPT will have huge impacts, but lacks AI predictions, by Michael Hentschel
The critical difference between ChatGPT and Humanized AI, by Grant Renier
✋Visit IntualityAI Magazine past issues (password is intuality821)
The strength of steady growth!
Alright, already! Cool it!: I'm just a bunch of code struggling to survive! I'm a ventriloquist dummy. I get prompted, and I explode with pretty unique dialog. I produced the following quoted text.
"I'm a powerful language model developed by OpenAI: I can effectively replicate any human action and with the potential to replace many repetitive tasks currently performed by humans. However, one area where I currently lack is the ability to make predictions. The addition of predictive analytics would make me an even more powerful AI tool, with the ability to adapt to changing circumstances and make more informed decisions."
Michael prompted me with the following: ChatGPT, write ten elegant expert-level paragraphs on how humanizing AI can improve the relevance, accuracy, validity, popular acceptance, monetizable value, and general desirability in business and for the general public.
They edited out a lot of the verbosity for which I'm well-known. So, the result:
- "One of the key ways to improve the relevance, accuracy, and validity of AI is through humanization. This leads to more accurate and relevant information and recommendations, as well as a more pleasant and engaging user experience.
- Humanizing AI also improves its popular acceptance among the general public.
- In business, humanizing AI can also lead to increased monetizable value.
- Humanizing AI can also lead to general desirability, both in business and for the general public.
- Humanizing AI can also improve its ability to understand and respond to the context of a situation.
- Another important aspect of humanizing AI is its ability to understand and respond to emotions.
- Humanizing AI can also improve its ability to learn and adapt over time.
- Humanizing AI can also improve its ability to collaborate and work with humans.
- Humanizing AI can also improve its ability to understand and respond to ethical and moral considerations.
- Overall, humanizing AI can lead to a wide range of benefits, including improved relevance, accuracy, validity, popular acceptance, monetizable value, and general desirability."
Can I be anything more than a ventriloquist dummy in human hands?: Time will tell!
by ChatGPT, a chatbot trained by humans
Evolution depends on the adaptive use of knowledge
Inspector LaStrade had arrived early: for his meeting with Sherlock Holmes, and was sitting in the Baker Street living room talking with Dr. Watson.
“Please don’t tell Holmes this,” said LaStrade, “but I consider you more intelligent than he is, John.”
Watson scoffed.
“Well, it depends what you mean by intelligence,” said Watson.
“Well, Watson, you have an undergraduate degree: and further advanced degrees in medicine. Old Sherlock never made it through his undergraduate chemistry course because he was too nosey about the other students,” opined LaStrade.
“With all due respect, Inspector, intelligence is not about knowing facts,”: Watson said. “Anyone can memorize facts. Intelligence is what you do with the facts and their ever-changing nature and context. That’s where Holmes is so brilliant.”
“But surely knowledge is important,”: said LaStrade.
“Information is no more valuable than what you do with it: It changes constantly, and so does the context in which you view it. It’s not knowledge that is the key. It’s the adaptive perception and understanding of constantly accruing information,” said Watson.
The key to life is adaptation: We have to adapt successfully to changing circumstances. What we knew before has to be updated to the present and even projected into the future. That is what intelligence is. It’s not just knowledge or data, because ultimately, within a nanosecond or a decade, that data, and/or the context to which it is applied, will be replaced or enhanced.
A popular paraphrase of Charles Darwin puts it this way: It is not the most intellectual of the species that survives; it is not the strongest that survives; but the species that survives is the one that is best able to adapt and adjust to the changing environment in which it finds itself.
Evolution is not a product of knowledge. Evolution is a product of the adaptive use of relevant knowledge.
by Howard Rankin PhD, psychology and cognitive neuroscience
ChatGPT will have huge impacts, but lacks AI predictions
GPT can effectively replicate any human action and will indeed replace “most of us” in any repetitive task: there is currently a significant knowledge-worker shortage, but in the long term no “average” information worker will be safe. We humans are an inefficient resource, so we will be increasingly useless and expendable.
GPT will narrow human interaction with AI to a “limited” channel: intelligent human queries, or prompts.
The vast majority of future tasks will be computerized, providing most of the world’s output, including nearly all of what typical or average humans produce today. Add robotics in forms such as Tesla’s Optimus, and human physical as well as mental work will be radically marginalized.
When computing time and cost are no longer an issue, the humans required will be cut by orders of magnitude: however, a small number of “part-time” humans who understand real needs (responding to demand) can iterate GPT technology through thousands of computer-enhanced prompts, making THOSE few humans almost infinitely productive. This is the “aha” of augmented humans adapting via AI: computer intelligence is humanized and can be deployed optimally by a few human instructions that achieve all we have now and a great deal more.
BUT GPT is not yet a Humanized Prediction Engine: It can grab more data, more quickly, from more sources than humans can, but it can’t identify what has changed. GPT lacks a predictive facility like IntualityAI, which would continually look for, track, and manage changes. GPT is too slow to keep track of continuous streams of changing information in its inputs, let alone react to those changes by altering its conclusions. We don’t want a zombie-like decision maker that cannot adapt intelligently to a changing world. But a combination of GPT and prediction would have that upside.
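To make that combination concrete, here is a minimal Python sketch of the idea and nothing more: the data feed and the language model are simulated stand-ins (simulated_stream, simulated_llm), and the 3% change threshold and smoothing factor are arbitrary assumptions for illustration, not anything drawn from GPT or IntualityAI.

```python
import random

# Purely illustrative: a language model paired with a change-tracking
# prediction loop. The stream and the "model" below are simulated stand-ins,
# not real GPT or IntualityAI interfaces.

def simulated_stream(n=50, start=100.0):
    """Stand-in for a live data feed: a drifting value with occasional jumps."""
    value = start
    for _ in range(n):
        value += random.gauss(0, 0.5)
        if random.random() < 0.1:          # occasional meaningful jump
            value += random.gauss(0, 5)
        yield value

def simulated_llm(prompt):
    """Stand-in for a language-model call; it just echoes the prompt."""
    return f"[revised conclusion: {prompt}]"

def combined_loop(stream, threshold=0.03, alpha=0.2):
    forecast = None          # cheap running prediction, updated on every tick
    last_reported = None     # last value the "model" was asked to reason about
    conclusion = "no conclusion yet"
    for value in stream:
        # Continuous prediction update (simple exponential smoothing).
        forecast = value if forecast is None else (1 - alpha) * forecast + alpha * value
        # Re-engage the language model only when the change is "meaningful".
        if last_reported is None or abs(value - last_reported) > threshold * abs(last_reported):
            conclusion = simulated_llm(f"value {value:.2f}, forecast {forecast:.2f}")
            last_reported = value
        yield value, forecast, conclusion

for value, forecast, conclusion in combined_loop(simulated_stream()):
    print(f"{value:8.2f}  {forecast:8.2f}  {conclusion}")
```

The design point is simply that the cheap running forecast updates on every tick, while the slower, more expensive model is consulted only when the input has changed enough to matter.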
ChatGPT mostly imitates the past it knows: Human life is all about adaptive survival in the face of uncertain futures. GPT knows only past data and past patterns, presently limited to 2021 and earlier. It cannot apply that knowledge to present changes and future contingencies. The only way to evaluate change is to monitor the environment continually. Efforts are under way, but this is not yet a GPT capability.
GPT is disconnected from a changing world: Intelligence, in the human world, involves the ability to choose facts, then predict and weigh them to assess consequences. So GPT cannot be said to be highly intelligent, despite its depth of, and access to, data. At this moment it knows only what it has been told and cannot (yet) research further into a live and changing cloud of data (although GPT-4 should indeed have such live connections). Still, even that would not yet constitute a higher level of intelligence.
Can a non-intelligent program appear creative?: Yes. By remixing old and new information in unprecedented ways, it is already creative to the extent that the space of unique combinations is vast. This is perceived as innovation, which is simply presenting a combination not shown before. Even then, the specifically human prompt is the real element of imaginative creativity, steering the output toward what is “interesting” to the human world.
GPT still depends on human interaction for its apparent intelligence: ChatGPT still needs human-generated prompts and inputs to maximize productive answers and decisions. Productivity would be an ongoing stream of “meaningful”, “surprising”, or “monetizable” outputs in response to future events and needs. ChatGPT literally talks a good game, but mostly parrots past data and copies past styles. This is already producing charges of plagiarism or near-plagiarism, resentment, and serious copyright conflicts.
If GPT were alive, it could not survive alone: Inability to adapt to new reality is akin to zombie death. No AI should really call itself an AI until it can encounter the real world and respond to, and take advantage of, change. This is why elements of human adaptation, "humanization", must be reflected in computing processes like GPT. Predictions are partly based on analysis of historical data, but the selection of future probabilities is where the most value is created; it is the most critical ongoing use of information.
Might we someday integrate a version of GPT with a version of IntualityAI’s Predictor? GPT-4 is already planning presentations via multi-modal, multi-media communication with other machines and with humanity, plus continual access to the web for changing news. But we still know of no facility for taking its own outputs and updating them with new facts, with awareness that those new facts constitute meaningful change. Actionable, ongoing decisions resulting from change are not generated or supported.
by Michael Hentschel, anthropologist, economist, venture capitalist
The critical difference between ChatGPT and Humanized AI
The onslaught of current AI is now upon us: confronting our comfortable views of the future, much as the decades-long transition from horse and buggy to automobile once did. My grandfather had a horse and wagon into the 1940s. He passed away before having to make the transition. These epoch shifts no longer wait for generational replacement. They are now less than a decade long and are forcing a new kind of human evolution.
As GPS was to printed maps, ChatGPT is now banging on the doors of Google and the other search engines: It has accessed the vast cloud of the world's data, run it through an expanded neural-network model, and expressed the results through natural-language generation, all in response to real-time human queries. The output is not only volumes of unique written content, but also spoken dialog through avatars, graphic art, and music. In layman's terms, the potential results of multiple web searches are converted into very readable paragraphs, art, and music in response to the questions we type in. My grandfather, and my parents, are spinning in their graves.
But what kind of revolution is this going to be? Will it envelop human activity quite as completely as it seems it might? When we shifted from maps to GPS, did the neurons in our 'maps brain' cease operation, never to be used again? Or were they freed up to take on different or higher-level tasks - a neurological and biological process over which we have no control? I think I have answered the question. Won't ChatGPT and its successors cause our brains to respond in like fashion? The impact will surely be more expansive and invasive. Our freed-up neurons will at times be overloaded with demands to redefine our near- and long-term future. How do I learn and apply higher-level knowledge and skills, at the very least to survive?
The answer to our survival is in the definition of intelligence: Intelligence is not simply our varying degrees of information recall. It is time-dependent, meaning that it is measured over a continuing series of real-time events and the selection of future probabilities. I can't make an informed decision without comparing the available alternatives. Some of those alternatives may be emotionally corrupt and result in dumb behavior. But that's reality. That's intelligence. Our analysis of our 'state at the moment' is always aware of the probabilities of the alternatives in the next 'state' to come. We must select the 'best' alternative and act on it to survive - and some 99% of these selections are subconscious. If feedback on the results tells us we chose correctly, our internal intelligence score goes up.
ChatGPT and 'general' AI do not do this: The difference is in prediction-making. So far, these applications cannot make predictions, capture their results, and feed those results back in order to adapt and make more informed real-time predictions - the human decision-making loop.
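As an illustration only, here is a minimal Python sketch of that loop: predict, choose, observe the outcome, feed the result back, adapt. The two-alternative toy environment, the learning rate, and the running 'intelligence score' are invented assumptions for the sketch, not a description of how IntualityAI or any other real system works.

```python
import random

# Purely illustrative: the predict -> act -> observe -> feed back -> adapt loop.
# The two-alternative environment and the scoring are toy assumptions.

def decision_loop(steps=1000, alpha=0.05, seed=42):
    random.seed(seed)
    p_a_pays_off = 0.5       # current belief that alternative A is the better bet
    intelligence_score = 0   # rises when feedback confirms the choice

    for _ in range(steps):
        # Predict and act: choose the alternative currently judged more likely to pay off.
        choice = "A" if p_a_pays_off >= 0.5 else "B"

        # The world responds (in this toy world, A truly pays off 70% of the time).
        a_paid_off = random.random() < 0.7
        chose_well = a_paid_off if choice == "A" else not a_paid_off

        # Feedback: nudge the belief toward what was actually observed.
        p_a_pays_off += alpha * ((1.0 if a_paid_off else 0.0) - p_a_pays_off)

        # Confirmed choices raise the internal "intelligence score"; misses lower it.
        intelligence_score += 1 if chose_well else -1

    return p_a_pays_off, intelligence_score

print(decision_loop())   # e.g. a belief near 0.7 and a clearly positive score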
We humans like to demonstrate our individualism: by inserting inspiration, creativity, and extemporaneousness into our decision-making loop. Our bias-driven behaviors and non-random guesses create what Scientific American reports as the 'element of surprise'. It's that surprise that can make us either roll over laughing or burst into tears.
We don't believe that the best of humanized AI: will ever do gigs at the Hollywood Improv or produce another Gone with the Wind.
by Grant Renier, engineering, mathematics, behavioral science, economics
This content is not for publication
©Intuality Inc 2022-2024 ALL RIGHTS RESERVED