Some publishers see generative AI as a fundamental challenge to their business.
Generative AI is certainly disruptive, but it also presents a lot of new opportunities.
As I say in my most recent letter, I believe the larger threat is from AI-curated content. Still, AI-generated content is a significant threat.
No matter where you stand on this, you should have a written policy for the use of AI in your organization.
That policy should cover general principles like …
- Transparency – when and to what extent do you have to disclose that you've used AI?
- Accountability – to what extent is a human responsible for the final product?
- Data privacy and compliance – make sure you're not feeding AI information it shouldn't have.
Your policy should also cover how AI can be used. For example …
- As an assistant in research and analysis.
- To check grammar and spelling.
- To create or edit first drafts.
- To create or enhance images (with some limitations below).
- To create a transcript of audio or video content.
In each of these cases, you should give guidelines on where the AI's work ends and human review begins.
You should list prohibited activities, like …
- Fully automated content generation. Personally, I think fully automated content is fine in certain cases – such as creating summaries of an article – so long as that's clearly explained.
- Using AI images to depict real people or events.
- Using AI to make value judgments on sensitive issues.
The policy should say whether a human must review all content before it's published. That's particularly important for fact-checking.
You might also require a certain level of training before employees are allowed to use AI.
I've drafted an AI policy for The Krehbiel Group, which you can use as a starting point. You can find it here.