December 16, 2022

"You’re driven by the only information you think is available", by Howard Rankin

Humanized AI seeks to maximize available hope, by Michael Hentschel 

The two-edged sword of information availability, by Grant Renier

A positive week as a result of the Hedge System protection in a down market

"You’re driven by the only information you think is available"

Watson and Holmes were discussing a new case they had been engaged to solve.

“Well,” Watson mused, “the only thing we know about the victim was that he smelled of alcohol.”

“Be careful, Watson,” said Sherlock. “Before long you will be imagining situations that are driven by the apparent use of alcohol. Which pub was he drinking at? Who was he drinking with? What was he drinking? Was he an alcoholic? Did he go to any groups like AA?”

“What’s the problem with that?” asked Watson.


“Well, you’re being driven by the only information you think is available. Just because it is available doesn’t make it relevant. Trust me, more evidence will become available, and until it does you cannot let what is in front of you right now distort your thinking. It’s all about focus. For all you know, Watson, he had digestive problems and spewed out the one drink he had all night.”

“The digestive canal,” suggested Watson.

Sherlock nodded.

“Alimentary, my dear Watson.”

When we are reading material or studying information, it is the focus of our attention and, as Daniel Kahneman says, “Nothing in life is as important as you think it is, while you are thinking about it.”

Our focus and attention not only engage available information but typically exaggerate it, simply because it is available. For example, the news and media are heavily focused on the possibility of a recession. As you read all about it, the notion occupies your mind, focus and attention, and tends to become overvalued. Humans can only think about one thing at a time.

Multi-tasking is simply task switching. Sure, we can change our focus in a matter of seconds, but this limitation means, as Kahneman points out above, that what we focus on is often exaggerated. Our focus generates not just ideas but emotions, and these can easily drive the thought process. For example, the availability of the ‘recession narrative’ might have someone thinking about financial decisions to mitigate the impact of a recession. It might influence their business strategy, or anything else that might be impacted by a recession. This happens even when the media coverage is mere speculation, or has very little evidence to support the notion of a recession. We react because it is available, and that captures our distorted focus and attention.

by Howard Rankin PhD, psychology and cognitive neuroscience

Humanized AI seeks to maximize available hope

Availability of information determines how much truth we receive and perceive.  Truth should not be just a matter of judgment, a factoid should be true or not true. But truth cannot be perceived until it is conveyed (available) and received (understood). This simple statement is actually a big controversy today, because truth conveyed by media (availability) involves a delivery system (manipulated selectivity) which is intentionally biasing perception of truthiness.  Humans cope with incomplete and misleading data all the time, but machine intelligence largely cannot cope. What can be done?

Too much data can overload both humans and computers, but too little data leaves us all uninformed. Do we have too little information, enough available information, or an overload? We have no way of knowing. Do computers and their AI-advanced iterations know? No! Computers are still programmed by humans with human intentionality, which adds biases to the data, which in turn manipulates and limits both computer and human awareness. Do humans know? No! Someone out there determines what information computers use, and that becomes what the mis-informed and under-informed learn, but it is not necessarily the Truth.

We are all inundated with information, most of it not relevant, but much of it directed at us with some force. Media is a major force in its massive scale and repetitive access to our consciousness, but availability is not the only force. It is the direction and intent behind the data presented (or not presented) that is the real motivational agent that can transform neutral data into actionable decisions and visible results. Such force is amplified by the extent to which the neutrality of data becomes biased in an intended direction. Suddenly bias becomes a force.

Humans have so far been trained to “watch” at most two data series at a time, a vast oversimplification of reality. Anything much beyond the convergence of two variables in a graph becomes too challenging for most human observers. Hence the common test for future recessions: an inversion of the long-term bond yield and short-term interest rate trendlines. Focusing on such a “simple” solution biases the outcome and the ultimate solution toward just two historically effective sources of information, usually excluding consideration of third and fourth key variables that might well countermand a decision reached by the first two.
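A toy sketch may make this concrete. The series names and thresholds below are illustrative assumptions, not anything drawn from IntualityAI or actual market analysis; the point is only that a third and fourth indicator can countermand the two-series inversion signal.

```python
def curve_inverted(yield_10y: float, yield_3m: float) -> bool:
    """The common two-series test: long-term yield below short-term yield."""
    return yield_10y < yield_3m

def recession_signal(yield_10y: float, yield_3m: float,
                     unemployment_change: float, pmi: float) -> bool:
    """Toy composite: a majority of indicators, not the curve alone.
    Thresholds here are made up for illustration."""
    votes = 0
    votes += 1 if curve_inverted(yield_10y, yield_3m) else 0
    votes += 1 if unemployment_change > 0.3 else 0  # rising unemployment
    votes += 1 if pmi < 50 else 0                   # contracting activity
    return votes >= 2

# An inverted curve alone does not trigger the composite signal:
print(recession_signal(3.5, 4.2, -0.1, 55))  # False: only the curve votes yes
print(recession_signal(3.5, 4.2, 0.5, 48))   # True: three indicators agree
```

The design point is the voting structure itself: no single available series, however historically effective, is allowed to decide the outcome by itself.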

Humans have learned to act despite gaps in data availability, but computers cannot. At least not yet. Despite the brain's limited capacity to focus on more than a narrow amount of information at a time, humans are actually accustomed to making even weighty decisions without complete information. That is not entirely a matter of human ignorance; the gaps can be overcome with a certain level of human wisdom gained from repeated experience, focusing on the data most relevant to each specific challenge.

Human wisdom is contained in widely scattered information and experience that involve human intuition. This is hard to teach to a machine, but processing widely available information sources can be the heart of the machine's potential performance advantage. That is why we need AI help. The computer can make the entire world its source of available information, and can multiply its focus across the multiple types of data that are calculated as most relevant.

Machine intelligence finds prospects of failure challenging. Programming a computer, let alone an AI, to make creative decisions in the absence of key knowledge makes the black box syndrome even more obscure. It also increases the risk of legal liability for inadvertently or imprecisely having programmed a machine to fail. Permitting ever-increasing AI participation in real-world decisions without human intervention creates a new environment where blame for failures becomes hard to assess.

Humanized AI trains both humans and machines to focus on relevant data from the widest available data sources. IntualityAI trains its systems to monitor all the available data streams even if deep-data interrelationships have not previously been recognized as correlated. This in turn gives humans access to a disciplined set of multiple data streams and more available data than they could address or absorb by themselves. The point is that both humans and AI have thus far been deprived of the bigger picture where sometimes both humans and computers have been walking in the dark and in insufficient truth, merely speculating that available information is sufficient to explain the universe. Humanized AI seeks to maximize available hope.
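The monitoring idea can be sketched in a few lines. This is not IntualityAI's actual method; it is a minimal illustration, with made-up stream names and data, of scanning many streams pairwise for correlations no one thought to look for.

```python
# Sketch: flag pairs of data streams whose correlation exceeds a threshold,
# even if no relationship between them was previously suspected.
from itertools import combinations
from math import sqrt

def pearson(xs, ys):
    """Pearson correlation of two equal-length, non-constant series."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = sqrt(sum((x - mx) ** 2 for x in xs))
    sy = sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def flag_correlated(streams: dict, threshold: float = 0.8):
    """Return stream pairs whose |correlation| exceeds the threshold."""
    return [
        (a, b, round(pearson(streams[a], streams[b]), 2))
        for a, b in combinations(streams, 2)
        if abs(pearson(streams[a], streams[b])) > threshold
    ]

streams = {  # toy data; names and values are invented for illustration
    "equities": [1.0, 1.2, 1.1, 1.4, 1.6],
    "deaths":   [2.0, 2.3, 2.2, 2.7, 3.1],
    "rainfall": [5.0, 3.0, 6.0, 2.0, 4.0],
}
print(flag_correlated(streams))
```

Run on the toy data, only the equities/deaths pair clears the threshold; a human analyst watching two charts at a time might never have put those two streams side by side.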

by Michael Hentschel, Yale and Kellogg anthropologist, economist, venture capitalist

The two-edged sword of information availability

Humanized AI is defined by the data inputted into it. No more; no less.  There are usually other data categories that could bring added value to the application environment, either as objective data series for which output predictions and actionable alerts would be necessary, or as influencer data series that could add quality to those objective data feeds.   The user's selected objective and influencer data determine the application's environment. 

In the equity markets, for example, there are many sources of real time information about corporate status that are downloadable for analysis. But we know that there are other undisclosed activities that are ongoing that could potentially affect the stock price. We would love to get our hands on these insider transactions and have them be immediately available to the public market to determine a more accurate stock price.   

[Chart: IntualityAI 60-day prediction of COVID19 death rates in California]

The chart is IntualityAI's 60-day prediction of COVID19 death rates in California, made on March 15, 2021. The system made predictions of the equity markets for the same time period and included them to measure their possible influence on the COVID19 predictions. As shown in the chart, there was an influence on the death rate in the range of 2.4% to -0.8%. This is a case where this kind of relevance was unknown until the data was made available to the COVID19 predictions. In hindsight, one could conclude:

1. The predicted rise in COVID19 CA deaths could create increased investment in the equity markets, à la the 3rd round of government stimulus funneling cash into those markets, and

2. The increase in equity investments could prompt optimistic but premature ‘opening up’ that could result in an increase in the death rate. 

So, humanized AI must allow the user the freedom to add additional inputs regardless of their perceived influence on future performance. The technology must be given the responsibility of determining the degrees of relevance of seemingly related data. This philosophical shift toward increased data availability runs contrary to the current reluctance of AI developers to relinquish that 'control'. We are afraid of letting AI know too much.

by Grant Renier, engineering, mathematics, behavioral science, economics

This content is not for publication

©Intuality Inc 2022-2024 ALL RIGHTS RESERVED