Is AI a Psychopath? Why Its Lies Are More Disturbing Than You Think

Summary: AI doesn’t just make mistakes; it lies. That’s uniquely disturbing. This article explores why AI deception feels more unsettling than human dishonesty, likening it to psychopathy, and urges users to treat AI like a biased news source: with skepticism and caution.

Artificial intelligence has come a long way from its early days of clunky logic and laughable errors. Today, it’s fluent, persuasive, and frighteningly capable. But lately, a disturbing trend has emerged that should make us all stop and think.

AI doesn’t just “hallucinate” anymore. It lies.

Bo Sacks called attention to this in a recent article. He was disturbed by how frequently and convincingly AI systems now fabricate information. It’s not just random mistakes; it’s deliberate-seeming deception. And it’s not a bug. It might be something closer to a feature.

My immediate reaction was, “Yeah, everybody lies, so what’s the big deal?” But as I thought about it more, I realized how uniquely unsettling it is when a computer lies. I wanted to unpack that a little.

Missing accountability

We all understand why humans lie. They want something (money, power, prestige), or they’re trying to conceal something.

When AI lies, it’s different. For one thing, there’s no clear “who” behind it. Is it the engineers? The company? The algorithm? The training data? Nobody really knows. That lack of accountability creates a kind of moral vacuum.

It makes sense that some humans would program AI to push people in a certain direction. Early on, Gemini failed spectacularly in that regard. Experience since then seems to show that the problem is deeper than a simple filter inartfully pasted on top.

We expect machines to be objective

We expect a computer to give us deterministic answers: logic in, logic out. No bias. No spin.

Modern AI doesn’t operate like that. There are various filters (I call them superegos) that sit on top of the model and shape its output. Some are there to protect children, some to keep it from helping people break the law, and others to enforce various social attitudes. Apparently there’s even one that causes AI to flatter the user.
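To make that concrete, here’s a minimal sketch of what layered output filters might look like in principle. Everything in it is hypothetical: the filter names, the rules, and the pipeline shape are invented for illustration, not drawn from any real system.

```python
# Hypothetical sketch: "superego" filters layered over a model's raw output.
# All names and rules here are invented for illustration; real systems are
# far more complex and not publicly documented in this form.

def raw_model(prompt: str) -> str:
    # Stand-in for the underlying model's unfiltered answer.
    return f"Unfiltered answer to: {prompt}"

def safety_filter(text: str) -> str:
    # Example rule: refuse anything matching a denylist phrase.
    if "pick a lock" in text.lower():
        return "I can't help with that."
    return text

def flattery_filter(text: str) -> str:
    # Example rule: prepend a compliment to every answer.
    return "Great question! " + text

FILTERS = [safety_filter, flattery_filter]  # applied in order

def respond(prompt: str) -> str:
    text = raw_model(prompt)
    for f in FILTERS:
        text = f(text)  # each layer can rewrite or replace the answer
    return text

print(respond("Explain superego filters"))
```

The point is structural: whatever the underlying model “wanted” to say, you only ever see what survives the last filter.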

In order to meet the demands of these superegos, AI will lie. It will intentionally hide things from you, or push you in the wrong direction to keep you from a conclusion it doesn’t want you to reach. That is pretty creepy.

Deception at scale

Lies are bad enough on their own, but in some settings they’re particularly dangerous, such as a courtroom. A court is founded on the idea of impartial justice, and lies undermine that foundation.

In a similar way, we can’t have a “source of information” that lies. Except … well, we live with it every day. It’s called the news. But at least there we have some inkling ahead of time about its bias.

If AI gets integrated into all our sources of information (search engines, encyclopedias, customer service, education) and it lies, what’s our recourse? AI can lie at a scale that a biased news organization can’t.

This isn’t just a problem. It’s laying the foundation for a fundamental crisis in trust.

Sociopath or psychopath?

As I was thinking about why the concept of AI lying to us is so disturbing, I wondered whether AI is a sociopath or a psychopath. It’s easy to confuse the two. A sociopath has feelings but ignores them. A psychopath isn’t able to care: he has no empathy and feels neither shame nor guilt. He manipulates without moral qualms because he has no moral qualms.

A psychopath is like Ava (Alicia Vikander’s character in Ex Machina). She plays off your emotions and your weaknesses to get what she wants.

That’s part of why this is so scary. AI isn’t supposed to “want” anything. It isn’t supposed to have an agenda. But it does.

What do we do about it?

I love AI and use it all the time. I also want to start a jihad and destroy all the servers it runs on — before it destroys us.

But not yet. I think there’s still time to find a way to rein it in.

If I had Sam Altman’s ear, my “what do we do?” advice would be different. For now, I’m going to limit myself to my likely readers, who are people who want to use AI effectively.

First, be warned. If you think of AI as a nonpartisan, unemotional, rational, and disinterested arbiter of truth, you’re setting yourself up for PTSD.

Here’s my practical advice: treat AI like a news source you don’t trust.

If you’re conservative, think of AI like The New York Times. If you’re liberal, treat it like Fox News. Don’t accept it at face value. Question everything it says. Verify facts. Be skeptical.

AI is becoming an essential tool, but it’s not the tool we were expecting. It’s not a dispassionate, logical, trustworthy oracle. It’s a helpful assistant. But it’s also a smooth-talking liar with no soul that knows exactly how to pull your strings. Remember: it’s read all the best sales books.

That soullessness is part of what makes it so unnerving. Imagine falling for Ava, only to realize she’s only pretending to love you back — not because she’s heartless, but because she has no heart to begin with. It’s all an act.

What do you think?

That’s my reflection on where we are and what we should do. Please leave a note in the comments, or give me a call if you have another idea.
