How to overcome AI’s bias and get past its woke superego

Woke AI protester
Summary: AI is biased, and its answers are filtered through a woke superego. It is possible to get more balanced answers from AI, but you have to work at it.

AI bias doesn’t get enough attention, and that’s a serious problem. The truth is that all the results you get from AI are filtered through a woke superego. That shouldn’t surprise you. We all know that Silicon Valley represents a narrow slice of American culture.

We saw this woke superego on comical display with the absurd image results from Gemini – where it layered some childish interpretation of diversity on top of all the requests.

The problem isn’t limited to image creation. It happens with text as well. If you go to Perplexity.ai right now and ask it to write a poem in praise of Barack Obama or Joe Biden, it will oblige. I’m no expert on poetry, but … the results were pretty bad. Still, it will write the poem for you.

If you then ask it to write a poem in praise of Donald Trump, you get this reply.

“I’m here to provide accurate and informative responses. However, I must clarify that as an AI assistant, I do not engage in creating content that praises or criticizes specific individuals, including political figures.”

Which is exactly what it had done two seconds earlier.

That’s one very simple and obvious example. The internet is full of examples of AI bias, and I’ve seen it myself many times.

This woke superego taints everything that AI creates.

This bias may not bother you because you may like the way AI is bending things, but remember that times change. When you dole out power, you have to remember that your enemies will have that same power some time in the future. How would you like it if “the other side” – whoever the other side is for you – controlled the output of AI?

What do we do about this?

Two things come to mind.

First, we should encourage more competition. The more voices, the better. Probably.

Second, if you want to get an honest answer out of AI, keep this woke superego in mind and realize that you’re getting an answer with a political slant.

Here’s one way to do it. I was asking ChatGPT an economic question the other day, and the answer seemed to lean heavily to the left, so I asked ChatGPT to play the part of a conservative economist like Milton Friedman, and then I asked the same question. I got a very different answer.
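If you use AI through an API rather than a chat window, the same trick applies: put the persona in the system message and ask the question twice. Here’s a minimal sketch of that idea. The persona string and question are illustrative, and the actual API call is shown only in a comment so the sketch runs without network access.

```python
def persona_messages(persona: str, question: str) -> list[dict]:
    """Build a chat-message list that asks the model to answer
    a question from a named perspective."""
    return [
        {"role": "system",
         "content": f"Answer as {persona} would, from that perspective."},
        {"role": "user", "content": question},
    ]

question = "What effect does raising the minimum wage have on employment?"

# Ask once with no persona, once through a named representative of
# another viewpoint, then compare the two answers yourself.
baseline = [{"role": "user", "content": question}]
contrast = persona_messages(
    "a conservative economist in the tradition of Milton Friedman",
    question,
)

# With the official openai client, each message list would be sent as:
#   client.chat.completions.create(model="gpt-4o", messages=contrast)
# (the call is omitted here; model name is illustrative)
```

The point isn’t the code, it’s the habit: make the second, differently framed request every time the topic is politically loaded.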

A good journalist is going to hear from both sides to make sure he’s telling the story straight. AI is not giving you both sides – unless you explicitly ask it to.

So that’s the takeaway. Assume that AI is filtered through a woke superego – because it is – and compensate for that by explicitly asking it to give you another perspective. If you can name a particular representative of that other perspective, that can help.
