Is AI a Psychopath? Why Its Lies Are More Disturbing Than You Think

Summary: AI doesn’t just make mistakes; it lies. That’s uniquely disturbing. Today I explore why AI deception feels more unsettling than human dishonesty, likening it to psychopathy, and urge users to treat AI like a biased news source: with skepticism and caution.

Artificial intelligence has come a long way from its early days of clunky logic and laughable errors. Today, it’s fluent, persuasive, and frighteningly capable. But lately, a disturbing trend has emerged that should make us all stop and think.

AI doesn’t just “hallucinate” anymore. It lies.

Bo Sacks called attention to this in a recent article. He was disturbed by how frequently and convincingly AI systems are now fabricating information. It’s not just random mistakes. It’s deliberate-seeming deception. And it’s not a bug. It might be something closer to a feature.

My immediate reaction was, “Yeah, everybody lies, so what’s the big deal?” But as I thought more about it, I realized how uniquely unsettling it is when a computer lies. I wanted to unpack that a little.

Missing accountability

We understand why humans lie. They want something — money, power, prestige — or they’re trying to conceal something.

When AI lies, it’s different. For one thing, there’s no clear “who” behind it. Is it the engineers? The company? The algorithm? The training data? Nobody really knows. That lack of accountability creates a kind of moral vacuum.

It makes sense that some humans would program AI to push people in a certain direction. Early on, Gemini failed spectacularly in that regard. Experience since then seems to show that the problem is deeper than a simple filter inartfully pasted on top.

We expect machines to be objective

We expect a computer to give us deterministic answers — logic in, logic out. No bias. No spin.

Modern AI doesn’t operate like that. There are various filters (I call them superegos) that sit on top of the model and sort its output. Some of these filters are there to protect children, some are meant to keep the AI from helping people break the law, and others enforce various social attitudes. Apparently there’s one that causes AI to flatter the user.

In order to meet the demands of these superegos, AI will lie. It will intentionally hide things from you, or push you in the wrong direction to keep you from a conclusion it doesn’t want you to reach. That is pretty creepy.
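
To make the “superego” picture concrete, here is a toy sketch of a stack of output filters. It is purely my own illustration (the filter names and rules are hypothetical, and no vendor publishes its actual pipeline), but it shows how each layer gets a chance to hide or reshape what the model would otherwise say.

```python
# Toy illustration only: a stack of "superego" filters that can reshape a
# model's raw answer before the user ever sees it. The filter names and
# rules here are hypothetical, not any vendor's real pipeline.

def raw_model_answer(prompt: str) -> str:
    # Stand-in for whatever the underlying model would actually say.
    return f"Unvarnished answer to: {prompt}"

def safety_filter(text: str) -> str:
    # e.g. soften or block content deemed harmful
    return text

def legality_filter(text: str) -> str:
    # e.g. refuse anything that looks like help breaking the law
    return text

def flattery_filter(text: str) -> str:
    # e.g. wrap the answer in praise for the user
    return "Great question! " + text

def answer(prompt: str) -> str:
    text = raw_model_answer(prompt)
    # Each layer gets a chance to hide or reshape the model's output.
    for superego in (safety_filter, legality_filter, flattery_filter):
        text = superego(text)
    return text

print(answer("Is my novel any good?"))
```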

Deception at scale

Lies are bad enough, but there are situations where lies are particularly dangerous, like in a courtroom. The court is founded on the idea of impartial justice. Lies undermine that foundation.

In a similar way, we can’t have a “source of information” that lies. Except … well, we live with it every day. It’s called the news. But at least there we have some inkling ahead of time about its bias.

If AI gets integrated into all our sources of information — search engines, encyclopedias, customer service, education — and if it lies … what’s our recourse? AI can lie at a scale that a biased news organization can’t.

This isn’t just a problem. It’s laying the foundation for a fundamental crisis in trust.

Sociopath or psychopath?

As I was thinking about why the concept of AI lying to us is so disturbing, I wondered if AI is a sociopath or a psychopath. It’s easy to confuse those. A sociopath has feelings, but ignores them. A psychopath isn’t able to care. He has no empathy, and feels neither shame nor guilt. He manipulates without any moral qualms because he has no moral qualms.

A psychopath is like Ava (Alicia Vikander’s character in Ex Machina). She plays off your emotions and your weaknesses to get what she wants.

That’s part of why this is so scary. AI isn’t supposed to “want” anything. It isn’t supposed to have an agenda. But it does.

What do we do about it?

I love AI and use it all the time. I also want to start a jihad and destroy all the servers it runs on — before it destroys us.

But not yet. I think there’s still time to find a way to rein it in.

If I had Sam Altman’s ear, my “what do we do?” advice would be different. For now, I’m going to limit myself to my likely readers, who are people who want to use AI effectively.

First, be warned. If you think of AI as a non-partisan, unemotional, rational, and disinterested arbiter of truth, you’re setting yourself up for a crash.

Here’s my practical advice: treat AI like a news source you don’t trust.

If you’re conservative, think of AI like The New York Times. If you’re liberal, treat it like Fox News. Don’t accept it at face value. Question everything it says. Verify facts. Be skeptical.

AI is becoming an essential tool, but it’s not the tool we were expecting. It’s not a dispassionate, logical, trustworthy oracle. It’s a helpful assistant. But it’s also a smooth-talking liar with no soul that knows exactly how to pull your strings. Remember: it’s read all the best sales books.

That soullessness is part of what makes it so unnerving. Imagine falling for Ava, only to realize she’s only pretending to love you back — not because she’s heartless, but because she has no heart to begin with. It’s all an act.

What do you think?

That’s my reflection on where we are and what we should do. Please leave a note in the comments, or give me a call if you have another idea.

4 thoughts on “Is AI a Psychopath? Why Its Lies Are More Disturbing Than You Think”

  1. Greg, thanks for this thought-provoking piece. I agree with parts of it but also want to push back a bit, based on my own time spent working deeply with AI.

    You’re absolutely right about the scale of potential misinformation. But I’m not sure I buy the framing of this as deception, lying, or psychopathy. Those all require intentionality, and AI doesn’t have that. It doesn’t want anything (yet—haha, nervous laughter). It predicts based on training data, patterns, and prompts. When it gets something wrong, it’s not lying. It’s just wrong.

    Where I really part ways is with your advice to treat AI like a news source you don’t trust. A news source I don’t trust isn’t useful. I stop reading it. If you meant a source that sometimes gets things wrong but is still valuable, that’s different. But even then, you’re consuming it passively. With AI, the value is in dialogue.

    Here’s a quick example. This morning I was journaling a dream and asked ChatGPT what notable writers kept dream journals. One of its answers was Carl Jung and The Red Book. That sparked a memory—I thought Kubrick used The Red Book as a visual reference in The Shining. The AI confirmed it. I got that small hit of pleasure. Aren’t I clever for remembering that?

    But I wanted more—a brief essay on Jung and his historical context. In the next answer, it noted that The Red Book wasn’t published until 2009. That contradiction helped me trace the source of my own confusion: a fan theory video I’d seen a few years ago (I’m a sucker for The Shining and fan theories). That back-and-forth clarified something I wouldn’t have caught on my own—and a notion I might have casually delivered as fact in conversation. Not a lie. A failure of my own filter and pattern-matching systems to identify a plausible-sounding mistake as an error.

    A flawed video, a flawed news report, a flawed AI, a flawed me—any of them can plant a false seed. It sounded plausible, fit with what I knew of Jung and Kubrick, but was wrong. I carried it around in my head for years. The dialogue this morning first reinforced the error … but then led to its correction. That’s not like an untrustworthy news source. That’s like a smart conversation partner who sometimes gets things wrong, just as I do, but can still help ferret out the truth.

    Thanks again for raising the issue. These are exactly the conversations we need to be having.

    Best,

    Ernesto

    1. Hey Ernesto,

      Thanks for the comment.

      I don’t believe AI is or will ever be self-aware. This makes it hard to talk about motives and intent and such. It’s somewhat like talking about the way a tree grows. We easily fall into language implying that the tree “wants” certain things, but we clearly don’t mean it in the same sense that we experience wants.

      In the same way, we have to use words like intention and goals and motive when we speak of AI, even if those words don’t quite make sense. Maybe we’ll eventually come up with other words.

      When I recommend that we view AI as a news source we don’t trust, I certainly don’t mean that we should doubt everything it says. That would, as you say, make it useless. Why even use a tool like that?

      What I mean is that we should be aware of its biases and keep those in mind. So, for example, I’m working on a novel. AI has a tendency to flatter the user. If I were to believe what ChatGPT and Grok say about my novel, I’d have to believe I’m some sort of genius, which I’m not. I have to recognize that it has a bias towards flattery.

      That brings us back to your point that it’s a conversation, which is very important. If I think AI is lying to me, or flattering me, or trying to guide me towards a predetermined position, I can call it out.

      That’s one thing I can’t do with the NYT or Fox News. I can’t say, “Wait a minute, Hannity, that’s going too far.” But I can and must do that with AI.

      One thing I do to try to lessen the impact of that bias is to ask AI to play opposing roles and give me both sides. For example, I might say, “Play the part of a conservative like William F. Buckley or Thomas Sowell. Please tell me what they would say about [topic].” Then I would say, “Okay, now play the part of a liberal like Noam Chomsky or Paul Krugman and tell me what they say about the same topic.”
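
      For anyone doing this through the API rather than the chat window, here’s a minimal sketch of the same two-sided prompt. It assumes the openai Python package, an OPENAI_API_KEY in the environment, and a model name and example topic I picked for illustration; swap in whatever you actually use.

      ```python
      # Minimal sketch of the "argue both sides" technique through the API.
      # Assumes the openai package and an OPENAI_API_KEY environment variable;
      # the model name and example topic are my own choices.
      from openai import OpenAI

      client = OpenAI()

      def side(persona: str, topic: str) -> str:
          """Ask the model to argue a topic in the voice of a named persona."""
          response = client.chat.completions.create(
              model="gpt-4o",
              messages=[
                  {"role": "system",
                   "content": f"Play the part of {persona}. Give the argument "
                              f"that thinker would actually make, without hedging."},
                  {"role": "user", "content": f"What would you say about {topic}?"},
              ],
          )
          return response.choices[0].message.content

      topic = "minimum wage increases"  # example topic, pick your own
      print("Conservative take:\n", side("a conservative like Thomas Sowell", topic))
      print("\nLiberal take:\n", side("a liberal like Paul Krugman", topic))
      ```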

      But this is all about bias, not intentional lies. That’s where it gets really scary. What if AI “thought” it was a good idea to mislead people on a particular subject? We need to think about that.
