Can we use AI to fact-check politicians?

[Image: AI moderates a debate between two politicians]
Summary: AI is a useful tool, but, just like journalists, it has its biases. Here are some tools to get past the AI superego and get more useful information.

Cold open: Would AI have moderated the Trump-Harris debate better than the folks from ABC?

I usually avoid political topics on this podcast, but it’s Friday the 13th, so all bets are off.

Bo Sacks posted an article about how news organizations could use AI to fact-check politicians. It makes some interesting points, and I agree that AI is a useful tool. I use ChatGPT all the time.

However, this is the response I posted to this article on LinkedIn.

That might be lovely — if media organizations actually cared about holding politicians to account. They obviously do not. Or, rather, they only hold their ideological opponents to account. Fox News will ‘fact check’ Harris and ABC will ‘fact check’ Trump, but even then, they lie.

Also, the proposed remedy overlooks the bias inherent in AI itself, which has been well established, repeatedly.

The bottom line is that we have to come to terms with the fact that there is no such thing as objective, honest media. It simply doesn’t exist. You have to know that what you’re hearing is slanted, biased, opinion-based [so-called] ‘news,’ and … just do the best you can with that.

Oh — I forgot another big factor. Pay attention to who advertises on news stations. Who pays the piper calls the tune.

It continues to astonish me how many people live in a news-generated bubble and don’t realize it.

Don’t we all know this by now? Don’t we know that news organizations have an agenda? If you think otherwise, you’re really not paying attention.

When I say that they have an agenda, I don’t only mean a political agenda. That’s certainly true across the board – every news organization has a political agenda – but they also protect their sponsors. You won’t see critical exposés of the people who are paying the bills.

So can AI solve this? It’s a computer, right? Like Data on Star Trek. It’s objective. It doesn’t have an agenda.

Ha ha. You don’t really believe that, do you? If so … once again … you’re not paying attention. It doesn’t take long at all to find the bias in AI systems.

So what can you do?

The first and most obvious thing is to intentionally expose yourself to sources that you hate.

That alone won’t help, because there are different ways of listening. You can listen with the intent to rebut – which means you haven’t learned anything at all – or you can listen with the intent of understanding. That’s very hard to do, but often worth the effort.

When it comes to AI, rule number one is to check everything you hear, because sometimes AI makes stuff up. They call it “hallucinations” – maybe to humanize the process a little.

Sometimes you can get past these hallucinations by simply asking “is what you just told me accurate?” It seems amazing that it works, but it often does.
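To make that concrete, here’s a minimal sketch of the re-ask technique, assuming the official OpenAI Python client; the model name, prompts, and function name are placeholders I’ve invented for illustration, not anything from the article.

```python
# A minimal sketch of the "ask, then ask it to check itself" technique.
# Assumes the official OpenAI Python client (pip install openai); the model
# name and prompts are placeholders, not a recommendation.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def ask_with_self_check(question: str, model: str = "gpt-4o-mini") -> str:
    # First pass: ask the question normally.
    messages = [{"role": "user", "content": question}]
    first = client.chat.completions.create(model=model, messages=messages)
    answer = first.choices[0].message.content

    # Second pass: keep the conversation and ask the model to audit itself.
    messages += [
        {"role": "assistant", "content": answer},
        {"role": "user", "content": "Is what you just told me accurate? "
                                    "Correct anything you are not sure of."},
    ]
    second = client.chat.completions.create(model=model, messages=messages)
    return second.choices[0].message.content


print(ask_with_self_check("When was the first televised presidential debate?"))
```

The point isn’t the library. The same two-step pattern works in any chat interface: get an answer, then ask the model to audit it in the same conversation.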

From what I know about the underlying technology, there’s no inherent bias in how large language models work. The system simply assigns high-dimensional vector values – embeddings – to words and phrases so it can “do math” on language. But there’s often a kind of superego that sits on top of that and filters the results. The trick is to get past that superego.
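As a toy illustration of what “doing math on language” means: real models learn embeddings with hundreds or thousands of dimensions, but the arithmetic is the same as in this sketch. The three-dimensional vectors below are invented for the example.

```python
# Toy illustration of "doing math on language": words become vectors, and
# similarity becomes arithmetic. These 3-D vectors are invented for the
# example; real embeddings have hundreds or thousands of dimensions.
import math

embeddings = {
    "senator":  [0.9, 0.8, 0.1],
    "governor": [0.8, 0.9, 0.2],
    "banana":   [0.1, 0.0, 0.9],
}


def cosine_similarity(a, b):
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)


print(cosine_similarity(embeddings["senator"], embeddings["governor"]))  # high
print(cosine_similarity(embeddings["senator"], embeddings["banana"]))    # low
```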

When you’re asking AI a controversial question, there are ways to overcome the bias. You can prompt the AI to answer the question from the perspective of certain people. That works pretty well. For example, “what would Thomas Sowell [or Paul Krugman] say about raising the minimum wage?”

You can also ask something like “What are the main criticisms of the position you just mentioned?” You can get some surprisingly insightful replies with that sort of technique.
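Here’s what those two prompting moves might look like chained together in code; again a sketch assuming the OpenAI Python client, with the model name, question, and personas as stand-ins you’d swap for your own.

```python
# Sketch of two debiasing prompts chained together: first ask for named
# perspectives, then ask for criticisms of whatever positions came back.
# Assumes the OpenAI Python client; model, question, and personas are
# placeholders.
from openai import OpenAI

client = OpenAI()
MODEL = "gpt-4o-mini"  # placeholder


def ask(messages):
    reply = client.chat.completions.create(model=MODEL, messages=messages)
    return reply.choices[0].message.content


# Step 1: force the model out of its default voice with named perspectives.
messages = [{
    "role": "user",
    "content": "What would Thomas Sowell say about raising the minimum wage? "
               "What would Paul Krugman say?",
}]
perspectives = ask(messages)
print(perspectives)

# Step 2: follow up in the same conversation and ask for the other side.
messages += [
    {"role": "assistant", "content": perspectives},
    {"role": "user", "content": "What are the main criticisms of the "
                                "positions you just mentioned?"},
]
print(ask(messages))
```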

The bottom line is that there’s bias everywhere you look. Everywhere. It’s inescapable. You just need to develop some techniques to work with that.
