December 9, 2022

At what point does historical data become irrelevant? by Howard Rankin

We have a behavioral defense against unexpected failures in remembering facts, by Michael Hentschel 

Hurricane Katrina is remembered when we discuss the potential of the next weather threat, by Grant Renier

A negative week for the portfolio but substantially better than the S&P 500 Index

At what point does historical data become irrelevant?

“I consider that a man's brain originally is like a little empty attic, and you have to stock it with such furniture as you choose. A fool takes in all the lumber of every sort that he comes across, so that the knowledge which might be useful to him gets crowded out, or at best is jumbled up with a lot of other things, so that he has a difficulty in laying his hands upon it.”

-Sherlock Holmes

Sherlock Holmes is often quoted on the “brain attic”, and specifically his warning not to fill the attic with too much “irrelevant stuff”. He famously limited his interests so that his attic would not become overcrowded with unnecessary information that had outlived its usefulness. If he were around today, Sherlock would clearly see the parallel between the brain attic and our technological devices’ need for extra RAM and cloud storage.

Computers are still largely designed to work the way we do: to prioritize data by trying to remember only what is remarkable or valuable. And most often, distant events are not remembered accurately, or at all. Unlike computers, which forget only what they are told to forget, humans forget both accidentally AND preferentially, and along the axis of time, memory decays.

Memory Decay is a critical concept for analysis, for prediction, and for the aging brain. The key question is, “at what point does historical data become irrelevant?” Moreover, not just irrelevant, but unnecessary or even detrimental.

In the human brain, the rubric seems to be that if you haven’t “used” the memory for a while, it gets eroded if not discarded. What was the name of your chemistry teacher in third grade, fifty years ago? It’s okay, if not beneficial, for you not to use up valuable storage space in your brain remembering this irrelevant detail. Save some space for more important data like your wife’s birthday.

History can be overrated. Some historical data might be useful, but some of it might be misleading as well as energy- and space-consuming. So what if the hottest day ever recorded in London in July was in 1951? How relevant is that to today’s analysis of Britain’s climate?

The more past data available, the more there is to forget. Unstructured data is not easily searched, let alone interpreted. Missing data can render rational decisions impossible, but where decisions must nevertheless be made, shortcuts like intuition and simplification allow imperfect conclusions.

Another problem with historical data is that the context changes over time, and what happened way back when may have absolutely no relevance to today’s environment and questions.

by Howard Rankin PhD, psychology and cognitive neuroscience

We have a behavioral defense against unexpected failures in remembering facts

When computers are missing key data or are fed misleading data, they struggle to deal with the gaps. They can be made unable to forget, but we still have to teach AI to fill in probabilistic and unexpected values when rational systems stumble over data decay in the real, human world. Many years ago, fuzzy-logic computer chips were invented, and they were seen as the beginning of an AI revolution able to deal with imprecise inputs and outputs. However, learning to handle imprecision is only a small part of the irrationality that is part of human reality.

Humans encounter memory decay every day; they adjust for it and replace it with experience and wisdom. They have developed a behavioral defense against natural or unexpected failures in remembering or losing key facts: they adjust their behavior, with caution or courage, to play within a probabilistic universe they have come to understand. They expect uncertainty and fuzzy language, can generalize past data irregularities and conflicting information, and still make effective decisions. Humans must act, whereas machines tend only to advise.

Machine-intelligence advisory roles are quickly morphing into quasi-independent, AI-decision-driven action. This may already be an unstoppable trend, as the complexity of decision inputs and processes resolved in real time exceeds any human capacity to make similarly efficient decisions and take instant action. However, this is obviously a worrisome potential form of uncontrolled decision-making as the machine world and the human world merge. People will get lazy or greedy and delegate all work and all decisions to the most efficient performers, as programmed by us.

Will rational computers make fewer mistakes in applying our programming to run our irrational world for us? The translation and transition seem contradictory and impossible, and at the very least dangerous, because we make so many mistakes ourselves. Still, memory permanence, openly available to all forever, would be a great contribution to the future: our past mistakes preserved as unchangeable history from which everyone and everything could learn lessons to apply to our mutual future.

Memory Decay is often an important Benefit. Among the benefits of memory decay is that we live in it continually and welcome forgetting what we don’t want or need to remember, allowing us to focus. It can also act as a kind of benign forgiveness: human relations become strained over long periods when there can be no clean-slate cooperation or dealings with former enemies, or when prior information is no longer desirable or relevant.

Memory Decay is often a Useful Benefit to machines as well. Most potentially useful data can and must be stored, but intentionally ignoring older computer data is of immediate benefit where irrelevant big data clogs the system. That assumes the computer can identify useless data, and there is currently no self-aware guide in machines that can make that judgment, so we keep everything for as long as economically possible. IntualityAI has studied how humans calculate and intuit relevance, and humans are actually quite adept at prioritizing data: they naturally tend to forget the distant past. IntualityAI research clearly shows that, for efficient prediction, the most recent change data gives the best predictions; the effort required to synchronize older data with more recent data points to a “relevance decay” in older change vectors, analogous to how the human world works best.
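As a rough illustration of that recency principle, here is a minimal Python sketch of exponentially weighted “relevance decay”. The decay rate, the change series, and the function names are all invented for illustration; this is not IntualityAI’s actual method.

```python
import numpy as np

def recency_weights(n_events: int, decay: float = 0.7) -> np.ndarray:
    """Exponential 'relevance decay' weights: the newest event gets
    weight 1.0, and each older event is discounted by `decay`.
    The rate 0.7 is a hypothetical value, not IntualityAI's."""
    # weights[0] is the oldest event, weights[-1] the newest
    return decay ** np.arange(n_events - 1, -1, -1)

def weighted_prediction(changes: np.ndarray, decay: float = 0.7) -> float:
    """Forecast the next change as a relevance-weighted average of
    past changes, so the most recent data dominates the prediction."""
    w = recency_weights(len(changes), decay)
    return float(np.dot(w, changes) / w.sum())

# Hypothetical weekly change data, oldest to newest.
changes = np.array([0.5, -0.2, 0.1, 0.8, 1.2])
print(weighted_prediction(changes))  # ~0.67, pulled from the plain mean 0.48 toward recent values
```

A geometric weight is just one convenient way to encode relevance decay; any monotonically decreasing weighting would express the same preference for recent change data over the distant past.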

Einstein is often quoted as saying that there is no need to remember something we can look up. This refers to the simple security inherent in a trustworthy way to store and retrieve information at will, be it written or computerized. But our human memories are limited and fallible, subject to a memory decay that not only loses memories but re-writes them in our minds with misinformation (false information spread without intent to deceive), disinformation (false information spread deliberately to mislead), or even our own imagination (fabricated information). All of these are part of our irrational human existence, not easily understood or related to by rational machines or AI alone. Being misled can be intentional or unintentional, but it is another variable in attempting to “true the data.”

Humanized AI might one day be more than an extension of our physical and mental abilities. It is unlikely that rational computers could ever fully understand and predict our irrational universe, except in unreliable probability projections (always wrong part of the time, by definition), given that we can hardly understand and predict for ourselves. But combine all those reliable memories and physical and mental capacities with our intuition and our tolerance of, even passion for, the challenges of uncertainty, and the combination should be a serious improvement on today’s chaotic, disconnected, decaying world.

by Michael Hentschel, Yale and Kellogg anthropologist, economist, venture capitalist

Raw data collected from our real world has embedded asymmetry

Humanized AI divides memory into short-term and long-term functions within the memory decay bias; different values control their decay rates. Our research over four decades, supported by published papers, has demonstrated that human memory is very forgetful of ‘recent’ event data, as opposed to those memories that are transferred to long-term memory through repeated stimulation and reinforcement. New event data is discounted (forgotten) at a fast rate, while stimulated and reinforced data is discounted more slowly into a long-term memory of accumulated historical events.
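A minimal Python sketch of this two-rate idea, assuming simple exponential discounting; the decay constants, the transfer fraction, and the MemoryTrace name are illustrative assumptions, not IntualityAI’s actual parameters:

```python
from dataclasses import dataclass

@dataclass
class MemoryTrace:
    """Sketch of a two-rate memory trace: fast short-term discounting,
    slow long-term discounting, with reinforcement-driven transfer.
    All rates here are invented for illustration."""
    short_term: float = 0.0
    long_term: float = 0.0
    st_decay: float = 0.5    # fast discounting of recent event data
    lt_decay: float = 0.95   # slow discounting of consolidated data
    transfer: float = 0.1    # fraction consolidated on each event

    def step(self, stimulus: float = 0.0) -> float:
        """Advance one event: decay both stores, add the new stimulus
        to short-term memory, and consolidate a fraction of it into
        long-term memory. Returns the total remembered value."""
        self.short_term = self.short_term * self.st_decay + stimulus
        moved = self.short_term * self.transfer
        self.short_term -= moved
        self.long_term = self.long_term * self.lt_decay + moved
        return self.short_term + self.long_term
```

The returned total is what later events would see as an input’s current memory status: a one-time stimulus fades quickly, while repeated stimulation keeps feeding the slowly discounted long-term store.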

The paper “Short-Term Memory to Long-Term Memory Transition in a Nanoscale Memristor,” authored by Ting Chang, Sung-Hyun Jo and Wei Lu and published on August 23, 2011, states:

“. . . forgetfulness is not always a disadvantage since it releases memory storage for more important or more frequently accessed pieces of information and is thought to be necessary for individuals to adapt to new environments. Eventually, only memories that are of significance are transformed from short-term memory into long-term memory through repeated stimulation.”

IntualityAI's short- and long-term memory decay closely resembles their results. The chart plots a hypothetical input of 100 input units of data over 20 events. The curves, read from left to right, show the effect of its memory decay function: the blue curve is the basic effect of one-time data decaying with no repetition of the same data, while the yellow curve is the effect of continued reinforcement from continued input of the same data. The flattening of the curves from left to right indicates fast initial discounting followed by slower discounting toward the right side of the chart. The current memory-status values of all inputs become the inputs for the next new event.
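Continuing the MemoryTrace sketch above under the same invented rates, the two curves can be approximated as follows; only the shape of the output is meaningful, not the specific values:

```python
# Approximating the two curves: 100 input units over 20 events.
# The "blue" trace receives the input once; the "yellow" trace is
# reinforced with the same input at every event.
one_time, reinforced = MemoryTrace(), MemoryTrace()
blue = [one_time.step(100.0 if event == 0 else 0.0) for event in range(20)]
yellow = [reinforced.step(100.0) for _ in range(20)]

print([round(v, 1) for v in blue[:4]])    # fast initial drop, then flattening
print([round(v, 1) for v in yellow[:4]])  # builds toward a stable plateau
```

With these assumed constants, the one-time trace shows the fast-then-slow discounting described above, while the continually reinforced trace settles into a persistent long-term level.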

Significantly, IntualityAI has demonstrated that its 'forgetfulness' is consistent for data inputted over a wide range of applications, from millisecond inputs for EEG-based epilepsy seizure prediction to weekly inputs for the investment markets, elections, economics and demographics, and even play-by-play events in NFL football.

In behavioral terms, when an initial new data input value is very high compared with other simultaneously inputted data (the yellow curve) and there is continued reinforcement, Humanized AI considers this a 'traumatic' event that, as updating continues, becomes dominant within the application.
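The same hypothetical sketch illustrates this dominance: a very high, continually reinforced input stays dominant over a routine one as updating proceeds (again, the values are invented):

```python
# 'Traumatic event' case: one input starts far higher than a routine
# input, and both are reinforced at every event.
routine, traumatic = MemoryTrace(), MemoryTrace()
for _ in range(20):
    r = routine.step(10.0)      # ordinary, continually reinforced input
    t = traumatic.step(100.0)   # very high, continually reinforced input
print(round(t / r, 1))          # ~10.0: the traumatic trace dominates throughout
```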

Hurricane Katrina, for example, is still remembered whenever we discuss the potential of the next weather threat.

by Grant Renier, engineering, mathematics, behavioral science, economics

This content is not for publication

©Intuality Inc 2022-2024 ALL RIGHTS RESERVED