Hey {{first_name}},
I spent 30 minutes editing a Claude post this morning.
The irony wasn't lost on me.
I'm about to teach a webinar on using Claude for content. I'd been sick all week, finally recovered, and sat down to create an example post to show people.
Claude gave me slop.
Obviously AI. The rhythm was off. The voice wasn't mine. The tells were everywhere.
Thirty minutes of editing to make it sound human.
This is supposed to be the good one.
More data is making AI worse, not better.
Counterintuitive, but watch what happens when you feed an AI more examples of "good writing."
It learns patterns from millions of posts. Most of those posts are average. Some are terrible. A few are excellent.
The AI can't tell the difference.
So it blends them all together into perfectly mediocre output that sounds like the average of everything it's seen.
More training data means more mediocrity to learn from.
The tells multiply because the AI is learning from content that already has tells.
It's like learning to cook by watching a thousand average home cooks instead of one professional chef.
You learn to make edible food. Never great food.
This is why Claude posts are getting more obvious.
Six months ago, the output was cleaner. Fewer people were using AI, so less AI slop was in the training data.
Now? Every platform is flooded with AI content. Claude learns from that flood.
The tool that was supposed to save time now costs me 30 minutes because it's learning from millions of people who also can't write.
And I'm about to teach people how to use it.
The system I'm teaching uses Claude in a way that avoids learning from the slop.
That's the difference.
Jack
