How to beat AI’s woke superego

[Image: Woke AI protester]
Summary: AI’s answers are filtered through a woke superego. It is possible to get more balanced answers from AI, but you have to work at it.

In my morning email I saw that Bo Sacks distributed an article about getting AI to behave. I was hoping the author would address the behavior problem I see, but … alas not.

There’s a serious problem with AI that doesn’t get nearly enough attention. All the results you get from AI are filtered through a woke superego. That isn’t a surprise at all. We all know that Silicon Valley represents a narrow slice of American culture.

We saw this woke superego on comical display with the absurd image results from Gemini – where it layered some childish interpretation of diversity on top of all the requests.

The problem isn’t limited to image creation. It happens with text as well. If you go to Perplexity.ai right now and ask it to write a poem in praise of Barack Obama or Joe Biden, it will oblige. I’m no expert on poetry, but … the results were pretty bad. Still, it writes them.

If you then ask it to write a poem in praise of Donald Trump, you get this reply:

“I’m here to provide accurate and informative responses. However, I must clarify that as an AI assistant, I do not engage in creating content that praises or criticizes specific individuals, including political figures.”

Which is exactly what it had just done two seconds earlier.

That’s just one example. The internet is full of examples of AI bias, and I’ve seen it myself many times.

This woke superego taints everything that AI creates.

This may not bother you because you may like the way AI is bending things, but remember that times change. When you dole out power, you have to remember that your enemies will have that same power. How would you like it if “the other side” – whoever the other side is for you – controlled the output of AI?

What do we do about this?

Two things come to mind.

First, we should encourage more competition. The more voices, the better. Probably.

Second, if you want to get an honest answer out of AI, keep this woke superego in mind and realize that you’re getting an answer with a political slant.

There are ways to get around this problem. I was asking ChatGPT an economic question the other day, and the answer seemed to lean heavily to the left, so I asked ChatGPT to play the part of a conservative economist like Milton Friedman, and then I asked the same question. I got a very different answer.
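
If you work with AI through an API rather than a chat window, the same trick can be scripted: ask the identical question twice, once with the default framing and once with the model told to argue from a different school of thought, and compare the answers. Here is a minimal sketch using the OpenAI Python client; the model name, question, and prompts are illustrative assumptions, not the author’s exact setup.

```python
# A minimal sketch of the "ask twice, compare the framing" approach described
# above. Assumes the OpenAI Python client library and an API key in the
# OPENAI_API_KEY environment variable. Model name and prompts are placeholders.
from openai import OpenAI

client = OpenAI()

QUESTION = "Should the minimum wage be raised? Summarize the key arguments."

def ask(system_prompt: str) -> str:
    """Send the same question under a given framing and return the reply."""
    response = client.chat.completions.create(
        model="gpt-4o-mini",  # assumed model; use whichever one you have access to
        messages=[
            {"role": "system", "content": system_prompt},
            {"role": "user", "content": QUESTION},
        ],
    )
    return response.choices[0].message.content

# Default framing -- whatever slant the model carries out of the box.
default_answer = ask("You are a helpful assistant.")

# Reframed -- ask the model to answer as an economist of a different school,
# in the spirit of the Milton Friedman example above.
reframed_answer = ask(
    "Answer as a free-market economist in the tradition of Milton Friedman."
)

print("DEFAULT FRAMING:\n", default_answer)
print("\nREFRAMED:\n", reframed_answer)
```

Reading the two answers side by side makes the slant of the default framing much easier to see than reading either one alone.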

A good journalist is going to hear from both sides to make sure he’s telling the story straight. AI is not giving you both sides – unless you explicitly ask it to.

So that’s the takeaway. Assume that AI is filtered through a woke superego – because it is – and compensate for that.
