Bo Sacks recently highlighted a problem we’re going to have to face in the near future: AI feeding on itself.
If AI is trained on content from the internet, then as AI displaces human authors, AI will increasingly be training on AI-created content. It’s not quite Ouroboros – the mythical snake eating its own tail – because Claude will be training on content generated by ChatGPT, Gemini, Perplexity, etc. And some of the content will be lightly edited by a human.
Still, it’s a sticky problem that creates a degenerative epistemological spiral, with each iteration driving content further from its grounding in reality. Or at least human reality.
This, I think, is what we’re seeing. Authoritative content collapses when systems optimize for scale while systematically removing editorial judgment.
The problem sounds new, but it’s not. It’s an extension of the general trajectory of the internet.
Merriam-Webster named “AI slop” its term of the year for 2025, but let’s be honest: the internet has been full of slop from day one.
The currency of the internet isn’t truth, or good taste, but numbers, and those numbers can be achieved through shock, outrage, cleverness, cuteness … and only rarely through truth. The bubble-headed beach blonde with a million followers has more influence than the person who actually knows what’s going on.
We’ve admitted as much to ourselves with that weird word, “influencers.” Not experts. Not competent or reliable reporters. We measure content creators by their reach.
We haven’t fallen so far that “truth has been democratized.” We still criticize morons with large followings. But the economy of the internet is built around followers and scale.
Think of this crazy train as traveling on two rails. On the first rail, the economics of the internet promote the creation of popular slop and the removal of the editor. On the second rail, the content creator (increasingly AI) falls into a self-referential loop.
We’ve been dealing with the first rail for decades. Newspapers had to sell copies, so there was always a market for giving the people what they want. That was restrained by another force: brand reputation, protected by the editor.
Even though we have to acknowledge the role of the hoi polloi, there was a restraint built into content production, and that restraint is really what Bo is getting at. We’re watching the progressive removal of the editor’s role, which takes us to our second rail.
Indulge me for a moment as I paint with some very broad strokes.
Before the printing press, the cost of producing a document acted as a restraint on slop. Printing technology improved to the point that anybody with an idea could make a book, a newspaper, or a magazine at fairly low cost. The internet brought that cost to almost zero, resulting in an avalanche of slop. AI takes it to the moon. “Make me an SEO-optimized website with everything anyone could possibly want to know about coffee,” and there it is.
What’s fading in each evolution of content creation is thoughtful curation. That is, editing.
The phrase “AI slop” caught on because it makes humans feel good about themselves. We humans have talent and judgment and style. AI is just slop.
It’s not that simple. There is AI slop, but there’s also human slop, just as there’s really good AI content and really good human content. The “fix” for AI is not simply to put a human editor on it; “human” doesn’t mean “good.” To take a personal example: despite having a decent science education, I’d trust an AI-written article on plasma physics before one written by me.
This brings us back to the real issue of the second rail. The question is not “is AI feeding on itself?” but “is AI feeding on junk?” AI itself can’t make that determination. (Yet?)
Rail one pushes us toward reach without restraint. Rail two plunges us down a spiral of increasing banality.
My first reaction to Bo’s article was to say we need to disclose when content is written by AI, but that doesn’t solve the problem. “Written by AI” is neither good nor bad, provided AI is being trained on a progressively better body of curated, accurate information – rather than on more and more slop.
Here’s Bo’s article: BoSacks Speaks Out: The Uncopyrightable Future and the Feedback Loop Nobody Wants to See