Left-Brain AI, Right-Brain Humans


In the film I, Robot, Del Spooner (Will Smith) lived a life of guilt and regret because an unfeeling robot chose to save him rather than a child. “Any human would know better!”

Are we making the mistake that Asimov predicted?

Science fiction is full of dystopian futures with “thinking machines.” Is there a good future? Can we imagine the right path, where humans and AI cooperate?

Here’s my image of that future.

  • Humans still have jobs and make enough to provide for themselves. No UBI, government-controlled redistribution of wealth, or people getting paid to do nothing.
  • Humans remain in charge. AI is used as a tool and does not make final decisions.
  • Humans remain fully human. No implants. We are not the Borg.
  • AI efficiency allows humans to work fewer hours. The ultimate “work smarter, not harder.”

This vision assumes that humans offer something that either AI can’t do, or isn’t allowed to do. What would that be?

I got an inkling of the answer after listening to Dr. Iain McGilchrist. He likes to quote this line from G.K. Chesterton.

The madman is not the man who has lost his reason. The madman is the man who has lost everything except his reason.

Isn’t that AI in a nutshell? Isn’t that Spock without McCoy? Data without Picard? M5 without Kirk?

Spock: Captain, you said they would choose peace. How did you know?
Kirk: I didn’t. I only had a feeling.
Spock: A feeling is not much to go on.
Kirk: Sometimes, Mr. Spock, a feeling is all we humans have to go on.

Modern humans sometimes think of themselves in terms of their rational apprehension of the world. What they think. How they reason. That’s only a sliver of crust on top of all the internal processing that goes on in our minds … and even in our bodies. Our “rational decisions” are often more like an after-action report.

This “rational” bias leads us to dismiss things like feelings. But a “feeling” is often how that sub-rational, internal processing bubbles up to the conscious mind. We are not purely rational beings.

AI is nothing like us. It’s just logic gates and algorithms. It can’t be trusted with important decisions, as illustrated by the way it behaves in war games. AI systems often choose to go nuclear because that’s the best way to win.

Why do they do this? Because they don’t have grandchildren.

The risk with AI isn’t that it will become irrational. The risk is that it will be “perfectly rational” in a system that forgets half of reality, which is precisely what Dr. McGilchrist says is happening with our society. We’ve become too left-brain — obsessed with clarity, rules, and measurable outcomes — and have lost the right-brain sense of context, meaning, and human connection.
That’s what a computer does. It reduces multi-variate, analog reality to a set of limited, binary, digital decisions. It’s very efficient for carefully circumscribed tasks, but it doesn’t get the full picture.

Here are some examples of how left hemisphere (computer-like) thinking has invaded our lives.

Most sensible business people want to use data to inform their decisions, but the posts in my LinkedIn feed make me believe that a lot of people want the data to make the decision. That can go wrong in so many ways. It’s the left hemisphere without the guidance of the right.

We see the same thing in strict, no-exceptions policies – mandatory minimum sentencing guidelines, HR rules, regulatory compliance over common sense, teaching to pass a test rather than for genuine learning.

In many ways, we’ve already become AI. We’ve replaced judgment with checklists and rules.

AI might be the fork in the road. We’re either going to continue into this “what gets measured gets done,” left-brain hellhole, and become less human – even physically – or we’re going to become more human by insisting that logic and “rationality” aren’t the whole picture, that a human life has more value than what it produces for the economy, and that human judgment – flawed as it is – is better than a nation of robots.

Some of the tech geniuses believe that with AI they are creating a new, better lifeform, and that if it displaces humans, well … that’s the way it goes.

I say to Hell with that. Humans first. To the extent AI improves human lives, great. To the extent it diminishes or degrades them, restrict it or pull the plug.

That leaves one important question that I can’t answer. How do we get there from here?
