Can brands control generative AI?

Summary: The article discusses a controversial incident in which an AI-generated poll from Microsoft accompanied a news story about a young water polo coach’s death. The poll asked readers to speculate on the cause of her death, prompting public outrage. The author highlights the broader problem of inappropriate or insensitive AI responses and suggests potential safeguards: review static content before it goes live, brainstorm worst-case scenarios for dynamically generated content, and disclose AI involvement to manage expectations and reduce offense. The article stresses that these risks deserve attention now, especially as AI moves into customer service call centers.

Bo Sacks sent me an email about an article in The Guardian regarding the death of a 21-year-old water polo coach. Microsoft created an AI-generated poll that appeared alongside the story. The poll asked readers, “What do you think is the reason behind the woman’s death?” and offered three choices: murder, accident, or suicide.

People got pretty upset about that.

It reminds me of the beginning of the movie “The Best Exotic Marigold Hotel,” in which a woman gets a marketing call, mentions that her husband has just died, and the call-center agent simply carries on with the script as if nothing had happened.

In other words, the problem of inappropriate or distasteful responses is not limited to AI. Any system that mimics human response without human sensibilities can create the same issues. You could get a similarly inappropriate situation with an ad placement.

My friend Lev Kaye says you almost always want a human in the loop, and I agree. But when every visitor to your website could be seeing a different ad or a different survey, how do you monitor that? How do you put the human in the loop?

We hear a lot about the potential of “disinformation” from AI, but “distasteful” might be the bigger threat to many brands.

It’s easy to say that you’ll review everything from generative AI before it goes live, but is that really possible? I don’t think anyone could have reviewed the output of the system that created that Microsoft poll. The logic was probably something like … when a story is about X, look up other stories about X and see what people talk about in the comments. Make a poll out of that.
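
Purely to make that concrete, here is a minimal sketch of what such a pipeline could look like. Every name in it is hypothetical, my guess at the mechanism rather than anything Microsoft has published, and it shows why no human ever sees the poll before it appears.

# Hypothetical sketch of the pipeline described above: guess the
# story's topic, mine comment themes from similar stories, and
# assemble a poll. Nothing here reflects Microsoft's real system.

def guess_topic(story_text: str) -> str:
    # Stand-in for real topic extraction (a classifier or LLM call).
    if "death" in story_text.lower() or "dead" in story_text.lower():
        return "the woman's death"
    return "this story"

def comment_themes_for(topic: str) -> list[str]:
    # Stand-in for mining comments on similar stories. On a story
    # about a death, reader speculation skews dark, and the pipeline
    # faithfully turns that into poll options.
    canned = {"the woman's death": ["murder", "accident", "suicide"]}
    return canned.get(topic, ["yes", "no", "not sure"])

def make_poll(story_text: str) -> dict:
    # The poll is assembled and published with no human review,
    # which is exactly where the risk lives.
    topic = guess_topic(story_text)
    return {
        "question": f"What do you think is the reason behind {topic}?",
        "options": comment_themes_for(topic),
    }

print(make_poll("A 21-year-old water polo coach was found dead."))

Run on a story like this one, a pipeline of this shape produces the offending poll with no one in the loop.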

What do you do? Here are some ideas.

First, let’s start with the obvious: review the things that can be reviewed, like images or static text.

Second, when something isn’t static, hold a “worst-case scenario” brainstorming session. It might be a fun exercise for your employees. Then ask whether dynamically generated content is worth the potential risk; one way to act on the answer is sketched below.
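
One way to put that brainstorm to work, and to answer the earlier human-in-the-loop question, is to encode the worst cases as an automatic screen: dynamically generated content that touches a sensitive topic gets held for a person instead of publishing itself. This sketch is my own illustration, not something from the article; the topic list and review queue are assumptions.

import re

# Hypothetical guardrail: the worst-case brainstorm becomes a
# blocklist of sensitive topics. Generated content that matches is
# held for a human instead of publishing automatically.
SENSITIVE_TOPICS = {"death", "dead", "suicide", "murder", "assault"}

def needs_human_review(text: str) -> bool:
    words = set(re.findall(r"[a-z]+", text.lower()))
    return bool(words & SENSITIVE_TOPICS)

def publish(text: str, review_queue: list[str]) -> None:
    if needs_human_review(text):
        # Human in the loop only where it matters, so nobody has to
        # eyeball every ad or survey shown to every visitor.
        review_queue.append(text)
    else:
        print("published:", text)

queue: list[str] = []
publish("What do you think is the reason behind the woman's death?", queue)
publish("Which vacation spot would you pick this winter?", queue)
print("held for review:", queue)

A blocklist this crude would miss plenty, but it shows the shape of the idea: automate the easy calls and route the scary ones to a human.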

AI is going to start taking over things like customer service call centers, and that promises to save a lot of money. But is it worth the risk of a horribly inappropriate comment?

Consider this. Maybe we shouldn’t try to make these AI replicants sound like humans. If ChatGPT makes a silly mistake that a human wouldn’t make, you don’t get mad — you think it’s funny. But that’s because you know it’s ChatGPT and not a person.

If you disguise AI — and try to make it sound like a human — you might be creating more problems than you’re solving. You might be better off owning up to the fact that a computer is answering the phone and making your automated customer service system sound like a computer.

Let’s take that idea back to the poll.

What if the website had said the poll was generated by AI? Would people have been as offended by its distasteful question? Maybe not. Maybe that’s the safer path.

Links

AI Ire: ‘The Guardian’ Blasts Survey Run Next To News Story In Microsoft
