How “relevance realization” distinguishes human intelligence from artificial intelligence

[Image: a gorilla playing basketball, generated by Midjourney. (I just noticed that the lady on the right has three legs! Thanks, Midjourney!)]
Summary: Computers don’t understand relevance, and it’s unlikely they will any time soon. “Relevance realization” is what sets humans apart from AI, and artificial general intelligence won’t happen until someone solves this.

Remember that fun video where the kids are passing a basketball, and the viewer is asked to count how many times the ball is passed? About half the people who watch that video miss the gorilla that walks into the middle of the scene and thumps its chest.

That’s a negative illustration of “relevance realization,” which is a very important topic in the world of AI.

I like AI. I use it all the time, and I hope I’ve learned how to use it well.

At the same time, I’m well aware of the vast library of fiction warning us about the dangers of AI (see link below), and I take that very seriously. Or at least as seriously as I can take something that I can’t influence. If Bill Gates creates Skynet, there’s not much I can do about it except die in a hopeless battle and hope for a foamy mug of mead in Valhalla.

There is a strong reason to believe that AI is nowhere near general intelligence, and that has to do with something called “relevance realization.”

Here’s the problem. There’s an almost infinite number of ways to process the sensory data we’re flooded with every day. Somehow our brains are able to find what’s relevant and focus on those things.

The significance of data is a different thing from raw data collection and processing, but for now I’m going to focus on the data itself.

To get a sense of the scale of the problem, consider vision. Each of our eyes has about 120 million photoreceptors. Only about 1-2 million nerve fibers transmit that information to the brain. That means we’ve already lost about 99 percent of the data.

Now let’s look at this in terms of bits of information. The optic nerves transmit about 10 million bits of visual information per second to the brain, but the brain can only consciously process 40-50 bits per second. That’s a teeny tiny fraction of the one percent of the data we’re getting from our eyes. The ratio of data input to conscious processing is at least 200,000 to one.
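
To make the arithmetic concrete, here’s a quick back-of-the-envelope calculation in Python. The numbers are just the rough estimates quoted above, not precise measurements; I’ve taken 1.5 million as a midpoint for the 1-2 million nerve fibers.

```python
# Back-of-the-envelope arithmetic for the visual-bandwidth funnel.
# All numbers are the rough estimates cited in the text, not precise measurements.

photoreceptors_per_eye = 120_000_000   # rods and cones
nerve_fibers_per_eye = 1_500_000       # optic nerve fibers (midpoint of 1-2 million)

# Fraction of photoreceptor output that gets a nerve fiber to the brain
survives_retina = nerve_fibers_per_eye / photoreceptors_per_eye
print(f"Survives the retina: about {survives_retina:.1%}")    # roughly 1% (about 99% lost)

optic_nerve_bits_per_sec = 10_000_000  # estimated optic nerve throughput
conscious_bits_per_sec = 50            # estimated conscious processing (40-50 bits/sec)

ratio = optic_nerve_bits_per_sec / conscious_bits_per_sec
print(f"Input to conscious processing: {ratio:,.0f} to one")  # 200,000 to one
```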

There are three other things to keep in mind here.

First, the 40-50 bits per second I mentioned above is conscious processing. Our unconscious minds process a lot more than that, and we have some weird filter that mediates between the conscious and the unconscious mind.

Second, a lot of this processing has to do with our bodies – with the fact that we have hands, for example. If we had claws, or fins, or wings, we’d process the information very differently.

Third, we don’t just react to pre-curated information, the way AI does. We react to dynamic and new situations, and quickly determine what’s relevant and what’s not.

All these things show why “relevance realization” is such an important concept. To some extent, our brains choose which tiny bits of information to focus on, based on whether something stands out, what goal we’re trying to achieve, our understanding of the situation and environment, and our past experiences and current expectations. All of that is filtered through our conscious and unconscious minds, it’s biased by the nature of our bodies, and it’s raw, uncurated stuff that can change on a dime.
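
To see why that’s so hard to mechanize, here’s a toy sketch in Python. Everything in it (the factor names, the weights, the scores) is invented for illustration; it’s not a model of how the brain works, just a way to make the idea of weighing many factors at once concrete.

```python
# A toy relevance score: a weighted mix of the factors named above.
# The factors, weights, and example scores are all invented for illustration.

from dataclasses import dataclass

@dataclass
class Stimulus:
    salience: float      # how much it stands out (0-1)
    goal_match: float    # how much it serves the current goal (0-1)
    familiarity: float   # how well it fits past experience (0-1)
    expectedness: float  # how well it matches current expectations (0-1)

def relevance(s: Stimulus) -> float:
    # Arbitrary fixed weights; surprising things (low expectedness) get a boost.
    return (0.3 * s.salience
            + 0.4 * s.goal_match
            + 0.1 * s.familiarity
            + 0.2 * (1.0 - s.expectedness))

# While we're counting passes, the ball serves the goal; the gorilla doesn't.
ball = Stimulus(salience=0.6, goal_match=0.9, familiarity=0.9, expectedness=0.9)
gorilla = Stimulus(salience=0.9, goal_match=0.0, familiarity=0.1, expectedness=0.0)

print(f"ball:    {relevance(ball):.2f}")     # 0.65: wins our attention
print(f"gorilla: {relevance(gorilla):.2f}")  # 0.48: loses, despite standing out
```

The catch, and the point of this post, is that no fixed formula like this actually works. The weights, and even the list of factors, have to be re-chosen on the fly to fit the body, the goal, and the situation, and nobody knows how to make a machine do that.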

Computers don’t do that. They don’t know how to pick what’s relevant, they don’t have unconscious minds, they don’t see the world from an embodied point of view, and so on.

The current crop of AI, like ChatGPT, is only able to pick relevant information by piggybacking on what humans have already determined is relevant. The data that’s going into ChatGPT isn’t random, raw data. It’s stuff that we’ve already picked, curated, and processed.

In order for AI to attain general intelligence, it’s going to have to be able to determine relevance on its own.

Think back to the gorilla video. People like to point out such things to make fun of the way humans process information. The thing is, that blindness to irrelevant information is what allows us to walk across a busy street, ignoring all the irrelevant sights and sounds and focusing on what’s going to keep us alive. It doesn’t work all the time, but it’s served us well, and for now, at least, that’s what’s keeping Skynet at bay.

Links

Warnings from Dune about artificial intelligence

3 thoughts on “How ‘relevance realization’ distinguishes human intelligence from artificial intelligence”

  1. Biological evolution (in animals, for example) required this ability. The constraints on short-term bandwidth are in large part related to requiring real-time responses in order to survive. But the constraints on intermediate and longer terms are largely due to the cost of broader bandwidth measured against its benefits. The evolution of AI (which now happens far faster than biological evolution, and could happen faster still as dependence on biological intelligence fades away) has far lower constraints. So while the points made are real, they miss the deeper meaning that AI actually embodies. The heart of the issue you raise is at the heart of what real AI ultimately is: whether the intelligence is being used to discover how general relativity and quantum mechanics are actually integrated, or whether some small, apparently insignificant event is a step toward a profound development (see Asimov’s Foundation), or whether some unnoticed event actually happened… and just why it is important to have noticed, say, the small rat scurrying under a shed next to the basketball court… carrying a deadly disease like the plague….

    1. Thanks for the thoughtful reply. AI is certainly developing faster than animals evolve, but now that I think about it, I don’t think it’s right to say that AI is “evolving.” It’s not a self-replicating system — at least not yet, and not in the same way. It also doesn’t have a body, and that seems to be an important part of relevance realization.

      There are areas where AI does seem to excel at finding relevance. For example, AI does well at reading medical scans. But is AI really doing that, or is it just piggybacking on what humans have already done?

      1. Good points but not points with “legs” (staying power). Let’s look at each:
        1. Current “evolution” of AI is essentially (and perhaps ironically) “intelligent design”.
        2. Agreed, it is not a self-replicating system yet. But as my lead implies and as you acknowledge… yet is the critical word. It is hard to imagine that if we get to anything close to GAI we will not have self-replication…
        3. Not having a body does not seem to me to be a critical point, and it is also not necessarily true. Not critical, inasmuch as if the AI has a means of translating decisions into physical action, that is sufficient to act in the real world. Or if it can communicate with others, such as us, it can give us information on which to act. But more than that, it is not necessarily bodiless: if it is able to translate decisions into action, then we might need to take a closer look at what we mean by “body.” Moreover, it is easy to imagine that GAI could eventually “decide” to create its own body, if we have not already done that for it, as with humanoid robots (or self-driving vehicles that become endowed with GAI)…

        On this last point, you are of the opinion that relevance realization requires having a body. I am not sure I agree. What is required is that any actual relevance have potential importance to the observer. Otherwise we could talk about a butterfly in the background rather than a gorilla, or, more extreme, something microscopic. If our senses are unable to detect something, then the limitation is in our senses, not our skill at discerning the meaningful from the meaningless. If it is something that we could imagine having import, then the intellectual component here can be supplied by a third-party observer not even physically present in the place and time. All it takes is the ability to understand that something could have relevance. And no body is required. Just an understanding of the potentials of what is seen (or smelled, for that matter) and a set of things that are “worth” our concern! If there were a car casually passing in the distance, across a playground and across a street, would that be important? Without context, no. But given context, like a police car with lights flashing, or the car of the parents of someone on the court, or the car belonging to a known violent criminal… or an ice cream truck… then the distant thing might in fact be worth our attention. It is context. And the observer who is trying to note what things are in fact worth noting need only understand how they might impact whoever or whatever is of importance. No body is truly needed for that.

        I will agree that in biological evolution the body is inherently needed just to be there to attend “the party.” And having a body, and thus both needs and concerns for safety, drives the need for the intellect to parse what the senses are picking up. But in the reality we face with the possibilities of AI, those requirements have changed.

        I should clarify: AI does need some sort of body, even if we are “just” talking about its “brain,” meaning its memory, its computing/reasoning hardware…

        On your last point, a good one. And it is the underlying problem of discerning just WHEN things transition from merely piggybacking to genuine individual capability.

        I hope there are further opportunities to discuss… and expand. And if you like, please reach out outside of this thread….
