Training AI to Understand Relevance: What Every Business Needs to Know

Summary: AI struggles with relevance realization, a human ability to prioritize meaning across contexts. Businesses should evaluate AI systems’ relevance handling, integrate human feedback, and ensure human support for key interactions.

At the beginning of “The Best Exotic Marigold Hotel,” Evelyn Greenslade (played by Judi Dench) has just lost her husband. In the midst of her grief she gets a call from a telemarketer in India who is faithfully reading a script, which, of course, doesn’t have any place for “I’m sorry for your loss,” or anything along those lines. The telemarketer comes across as crass and inhuman.

Greenslade sees this as an opportunity. She goes to India to train the telemarketers in how to have more human interactions with their prospects.

There are some parallels between this story and what we’re seeing with AI, and it strikes me as a reasonable picture of how humans will interact with AI in the near term.

Just as Evelyn trained the human computers who mechanically read a telemarketing script, so humans are training AI systems. For example, if you want to create an AI routine that can recognize images of dogs, you have to have humans tell the AI when it’s right and when it’s wrong. It doesn’t understand the concept of “dog” the way we do. It needs to find an algorithmic way to mimic our understanding.
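This human-in-the-loop training can be sketched in a few lines. The code below is purely illustrative: the string-matching “model” is a hypothetical stand-in for a real image classifier, and the class and method names are mine, not from any particular library.

```python
# Minimal sketch of human-in-the-loop labeling. A human tells the
# system when it's right and when it's wrong; the disagreements are
# kept as training data for the next round.
from dataclasses import dataclass, field

@dataclass
class LabelingLoop:
    corrections: list = field(default_factory=list)

    def model_predict(self, image_name: str) -> str:
        # Hypothetical stand-in for a trained classifier. Its crude
        # rule ("dog" in the filename) mimics an algorithm that has
        # no real concept of "dog".
        return "dog" if "dog" in image_name else "not-dog"

    def review(self, image_name: str, human_label: str) -> bool:
        """Compare the model's guess against a human judgment."""
        correct = self.model_predict(image_name) == human_label
        if not correct:
            # Store the correction so the model can be retrained.
            self.corrections.append((image_name, human_label))
        return correct

loop = LabelingLoop()
loop.review("dog_photo_01.jpg", "dog")         # model and human agree
loop.review("hotdog_photo_02.jpg", "not-dog")  # model fooled; correction stored
print(loop.corrections)                        # [('hotdog_photo_02.jpg', 'not-dog')]
```

The “hotdog” case is the whole point: the algorithm keys on a surface feature and misses what a human finds obvious, so the human verdict becomes the training signal.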

The more important point – as the example from the movie shows – is that the relevance of an answer goes far deeper than “is this a dog?” A human would consider so many more things. Is this dog in distress? Is it lost? Is it cute? Is it a pet? Is it a threat? And so on.

Humans are able to determine what information and details are relevant to a given situation. AI doesn’t do that very well. AI piggybacks on the relevance filtering that humans have already done.

Humans can recognize relevance across vastly different scales, from deciding whether that snake-shaped thing in the corner of your eye is a threat to thinking about long-term existential goals. Determining relevance relies on a self-organizing nested hierarchy of values – which we’re often not even consciously aware of. In other words, we don’t consult a dictionary, a blueprint, or a checklist when we determine whether something is relevant. That’s exactly what a computer would want to do, but that’s not how our embodied intellect works.

As of right now, the embodied intellect outperforms the computer on questions of relevance. This means there must be a higher level of training for AI. Not “is that a dog?” but “is that comment relevant to this situation?”

AI is getting better at that all the time, but it’s a very deep and difficult question. Think of all the times in real life when you’ve been tempted to yell “that’s not what’s important right now!” Or again, think of the quiet moments when answering the literal question is precisely not what you should do. Humans get that. Or at least women do. Computers don’t.

Relevance realization balances competing constraints, such as efficiency or prudence vs. taking a risk, or local detail vs. global patterns. This allows us to avoid getting stuck in either overly rigid thinking or chaotic, unfocused exploration. (Even if it doesn’t work all the time!)

The bottom line is that these things aren’t purely computational – at least in any sense we’ve been able to compute so far. Humans manage these questions with our embodied mind, which relies on how we physically interact with and perceive the world, which in turn is fine-tuned and optimized as we pay attention, explore, or ignore. It all happens in a twinkling, and we don’t even know how.

One of the big next steps in AI development is solving this “relevance realization” problem. I suspect that two paths will prevail. First, there will be deep consultation with neuroscientists and others who study how our brains do all this amazing stuff so quickly and effortlessly. Computers don’t necessarily have to do it our way, but insights from this study will be key to the second part, which is finding ways to use computational brute force to mimic what our embodied minds so readily do.

Why this matters for your business

Since most of my readers are probably neither neuroscientists nor experts at high-end computational wizardry, what’s the practical point for you?

  1. When considering an AI system for your business, press the engineers (not the salesmen!) on how they deal with the problem of relevance. Brainstorm situations where an algorithmic approach to a problem can yield horrible results (as in the movie example), and test the system in those environments.
  2. Allow humans – both your staff and the people who use the system – to judge the relevance of the AI’s output. Keep that feedback so your engineers can see whether the problems it reveals can be solved algorithmically.
  3. If at all possible, provide something analogous to “get me to a human, please” – especially for your best customers! It’s one thing to leave the hoi polloi to the tender mercies of an annoying AI program. It’s another thing entirely to alienate the people who pay the bills.
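Points 2 and 3 above can be sketched as a small feedback-and-escalation layer around the AI system. This is a hypothetical outline, not any vendor’s API; the function names, the `"premium"` tier, and the log format are all my own illustrative assumptions.

```python
# Sketch of a relevance-feedback log plus a "get me to a human" rule.
feedback_log = []  # kept so engineers can later look for algorithmic fixes

def record_feedback(interaction_id: str, relevant: bool) -> None:
    """Let staff or users mark an AI response as relevant or not."""
    feedback_log.append({"id": interaction_id, "relevant": relevant})

def should_escalate(customer_tier: str, asked_for_human: bool) -> bool:
    """Route to a person on explicit request, and always offer the
    option proactively to your best customers."""
    return asked_for_human or customer_tier == "premium"

record_feedback("chat-1042", relevant=False)
print(should_escalate("premium", asked_for_human=False))   # True
print(should_escalate("standard", asked_for_human=True))   # True
print(should_escalate("standard", asked_for_human=False))  # False
```

The design choice worth noting is that the feedback log and the escalation path are separate: the log feeds future improvement, while escalation protects today’s high-value interactions.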

Can AI do this?

Who knows? I’m completely certain that AI cannot have the inner life of a human being. If you’re curious about why, let me know and I’ll explain.

I’m not at all certain about how well AI will be able to mimic the external life of a human being. It’s likely that it will get very good at it, but I don’t know the limits. Will we get “mostly human but somewhat quirky” AI like Data in Star Trek: The Next Generation, or will we get horrifyingly convincing AI like Ruk in the original series episode “What Are Little Girls Made Of?”

I’m hoping for Data.
