January 6, 2023

People have no idea how it works but will trust it does, by Howard Rankin

Black box AI and the inescapable necessity for human trust, by Michael Hentschel

See our new website library of past IntualityAI Magazine Issues (password is intuality821)

People have no idea how it works but will trust it does

Watson and Holmes were speculating on whether the victim sprawled on the floor in front of them had experienced some sort of heart attack.

“John, what would this man have felt if he did indeed suffer a heart attack?” Holmes asked his medical colleague.

“It depends. He probably would have felt dizzy, maybe had some difficulty breathing. He might not have had enough time to register that it was a heart problem,” Watson responded.

“It’s interesting, don’t you think, Watson, that we all assume that our hearts will work properly, but the vast majority of people outside of your profession don’t really know the details of how they work.”

“What’s your point?” asked Watson.

“Well, like most other things, we assume that our hearts are working well simply because there is an absence of apparent problems. It’s only malfunctions that test our trust in the cardiovascular system. And many people are unaware of malfunctions that indicate the system is at risk and not working well.”


Watson still wasn’t quite getting it and squinted at his colleague while tilting his head.

“My point is that people work on trust and faith in just about everything. Most people don’t know how things work, and assume they’re working until there is a problem that is apparent. And many problems aren’t immediately or even ever apparent.”

Watson nodded in thoughtful agreement. However, Holmes sensed that his colleague still did not fully understand the point of his argument.

“So, we have trust and faith that something will work until it doesn’t. Which means nothing outside bounded systems is ever ‘proven’. Even the most trusted and best technology will eventually fail,” said Holmes. “It’s just a matter of time.”

“And, of course, that is how our knowledge and capabilities expand and develop,” opined Watson.

“The world is filled with infinite possibilities, but we are designed to have faith and trust in things that seem to work, because we don’t know the infinite possibilities,” said Holmes.

“That is helpful,” said Watson. “It shows me why you are so clever, Holmes. You push your trust and faith to the limits. You never assume that something is obvious, because that is a sign of faith, not reality.”

“Elementary, my dear Watson.”

There’s no reason why faith and trust shouldn’t play a similar role in AI. Most people have no idea how it works but will trust it does, until it doesn’t, or at the very least becomes outdated. And with different forms of AI operating in many diverse environments, it’s highly likely that this murky dependence on trust will lead to many divisive discussions about AI’s efficacy and its place in the world.

by Howard Rankin PhD, psychology and cognitive neuroscience

Black box AI and the inescapable necessity for human trust

Fact checkers are essential to both human and machine trust in what is stated about reality. Facts can be faked and manipulated, and facts can be misinterpreted even with good intentions. Information is only as good as its source and as valuable as its use.

So it has become important to develop fact-checking methodologies that work. However, those methodologies need to be trustworthy, and there is plenty of evidence that fact-checkers can be biased as much by the inclusion or exclusion of true facts as by the selection of the facts themselves. Fact checkers can misinform and disinform as easily as they can inform. It all depends on whose information is trusted and deemed to be the truth.

“Trust but Verify” belonged to a bygone age, and was insufficient even then. Today we cannot afford simply to trust first. In a fast-moving world of sub-second decisions, aggregated in real time by computer systems and replicated to masses of human beings and widely networked information systems, trust is a fleeting commodity.

If we verify everything first, we risk delays and flaws in the verification methods. Technically, our technologies should be able to maximize verification, and we may then decrease the odds of error. But odds will always remain: only a probability that our systems are trustworthy.

If we trust first, there will usually not be enough time to verify truth or substance before potentially great damage is done. Humans and machines act and react instantly. We increase the odds of error by inserting trust, unless that trust is itself somehow reliable.

Could our human trust instincts be reliable? We were once practically limited to acting only on trust. It might be said that our minds long ago evolved to deal with probabilistic reality by building systems of trust that served us well enough before the Information Age. In the past, our trust has faced the probable and unpredictable events of the world with great imperfection, but arguably also with great progress.

Reality is Probabilistic. We do not expect 100% certainty in our lives, but we wish we could maximize our chances in everything we desire. Thus we have learned to gamble with instinctive actions despite imperfect information. Fear and faith are among humanity’s most dominant motivators.

What if our systems all become algorithms whose pronouncements and predictions are essentially black boxes? Our individual choices have always been individual black boxes weighing available facts, existing instincts, and biases, so it is not all that radical to have our collective or future decisions influenced or made by black boxes that we believe to have superior reasoning skills. But it is a transition for us to let external black boxes decide for us, one that requires explicit trust placed in those machines, much as we repeatedly place collective trust in our human institutions. The results are often not what we want. So we have to insist on the most advanced and comprehensive technologies in every instance.

When complexity exceeds our ability to verify facts or even methodologies, are we just back to human trust?

by Michael Hentschel, anthropologist, economist, venture capitalist

This content is not for publication

©Intuality Inc 2022-2024 ALL RIGHTS RESERVED