The AI Slop Economy (And Why We're All Complicit)

I received an 80-page document last week. The person who sent it hadn't read a single page. This is the AI slop economy, and we're all participating in it.

Jaydyn Rosevear · 9 min read
AI is making lazy people lazier and productive people unstoppable.

I received an 80-page document last week.

My colleague generated it in an hour using ChatGPT. When I asked if they'd read it, the silence told me everything. They hadn't. They just hit generate, exported to PDF, and sent it my way.

This is the AI slop economy. And we're all participating in it.

What AI Slop Actually Looks Like

You can spot AI slop immediately if you know what to look for. The em dashes scattered throughout like confetti. The phrases that sound professional but mean nothing.

"This isn't a pressure email."

"You made the decision to back yourself."

"Life gets busy, priorities shift."

Humans don't talk this way. We just don't. These are filler phrases AI uses to pad content, and when you see enough of them, you know AI drafted it.

The most egregious examples? Emails that still contain "insert name" or "insert action item" in the body. Someone literally copy-pasted from ChatGPT without reading what they sent.

A 2025 study of 65,000 URLs found that 52% of newly published articles were AI-generated. The slop economy isn't theoretical. It's already the majority of what we encounter online.

The Disrespect Factor

When someone sends you content with "insert name" still visible, they're telling you something.

They don't give a shit about your time.

They think you won't notice.

They assume they're clever for using AI this way.

But here's the thing about intelligence: it's not about knowing how to prompt ChatGPT. It's about being a critical thinker. It's about thinking something through from start to finish.

By sending AI slop, you're not demonstrating intelligence. You're demonstrating the opposite.

I'm an AI educator. I teach people how to use these tools properly. And this is exactly what not to do.

The Economics Behind the Madness

The average cost of producing a 2,000-word article has dropped 44% since 2024, from $480 to $268, thanks to AI assistance.

When you can produce content at half the cost, quality becomes irrelevant to the business model.

Analysis identified 278 YouTube channels dedicated entirely to AI slop that have collectively garnered over 63 billion views and 221 million subscribers, with estimated annual ad revenue reaching $117 million. The top channel alone potentially earns $4.25 million yearly from AI-generated content.

This reveals the structural incentive problem. Speed pays. Volume pays. Quality doesn't.

People in my company produce 30- to 80-page documents generated entirely by ChatGPT. Standard operating procedures. Workbooks. Email sequences that run 10 pages long.

They're producing these documents in the space of an hour.

Did they read what they created? No. They just needed to check a box that said "document created."

The Efficiency Trap

Here's what people think they're doing: taking the fastest path to the outcome.

Over the past few weeks I've been speaking with the operations team about our email workflows. The team spent an entire day creating workflows, smart lists, tags, country codes, and triggers. Hours of technical work to ensure everything functioned perfectly.

Then they drafted all the email content using ChatGPT. No editing. No human touch.

What was the point? The goal was to get people to open emails, read the content, and engage. If we're going to build all that infrastructure only to fill it with garbage, we've failed at the one thing that actually mattered.

The human connection piece is what people sacrifice because it requires the most thought.

Building workflows becomes repetitive. You get so good at it that you don't have to think. Content is different. No matter how long you've been writing, you always have to think about what you're putting on paper.

The Critical Thinking Crisis

A Microsoft study found that higher confidence in GenAI's ability to perform a task is related to less critical thinking effort. While GenAI can improve worker efficiency, it can inhibit critical engagement with work and potentially lead to long-term overreliance on the tool.

If you don't use it, you lose it.

Back in 2019, I was the head of a franchise company I had started, with 14 gyms. One of my primary jobs was drafting franchise agreements, master franchise agreements, and lease agreements. My head was deep in the legal space.

It's been five years since I've had to do that. Recently, I had to sit down and draft some legal letters.

I struggled. I struggled because I hadn't done it in five years.

Now imagine outsourcing a skill as fundamental as critical thinking to AI. What happens over time when you're just sitting around, not really thinking?

Your brain atrophies. You forget how to think critically. You forget how to dig deep in tough situations.

You become a weak person.

The Moral Dimension

There's a character dimension to how we use these tools, not just a quality issue.

If you were already a lazy person, AI is your best friend. You just figured out a way to become even lazier.

If you were already a productive person, you're using AI to become considerably more productive.

You have two different operators:

People actively using AI to become lazier, do less, and think less.

People using AI to do more, become more productive, work out ways to reduce expenses and increase productivity, and generate wealth.

Weak people and strong people.

What Responsible AI Use Actually Looks Like

I use AI heavily. We have custom agents for operations, finance, marketing, and customer service. We're very deep in the AI space.

Today I used our finance agent. The first thing I did was spend an hour creating a full brief in a Google document. Six pages. That brief was the initial prompt I sent to the finance agent.

I had two options:

Take the time to curate a perfect brief, ensuring I extracted all the right data.

Have a back-and-forth conversation with the agent, which would burn tokens, eat into the context window, and cost the company more money and time.

The thing about prompting is that if you go back and forth, you invite more questions that can take you on a different path from what you actually wanted.
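
Here's why that second option costs so much, sketched with invented numbers (a rough model, not real pricing): chat APIs resend the entire conversation history on every turn, so the input tokens of a long back-and-forth compound with each exchange.

```python
# Toy math, not real pricing: one curated brief vs. a back-and-forth chat.
# Chat APIs resend the full history each turn, so input tokens compound.

def one_shot_cost(brief_tokens: int, reply_tokens: int) -> int:
    """One comprehensive brief, one reply."""
    return brief_tokens + reply_tokens

def multi_turn_cost(turns: int, msg_tokens: int, reply_tokens: int) -> int:
    """Every turn, the model re-reads all prior messages before replying."""
    total, history = 0, 0
    for _ in range(turns):
        history += msg_tokens             # your new message joins the context
        total += history + reply_tokens   # model reads history, writes reply
        history += reply_tokens           # the reply joins the context too
    return total

# Hypothetical sizes: a six-page brief (~4,000 tokens) vs. ten short exchanges.
print(one_shot_cost(4_000, 1_500))      # 5,500 tokens
print(multi_turn_cost(10, 400, 1_500))  # 104,500 tokens
```

The exact numbers are made up, but the shape isn't: the one-shot brief pays the context cost once, while the conversation pays it again on every turn and invites the drift described above.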

Most people don't understand this. They don't understand how AI thinks. They don't understand what context windows are. They're not using the right tool for the right job.

ChatGPT is a great brainstorming tool. I wouldn't trust it to write a legal letter the same way I would trust Perplexity or Claude. They're different tools.

You wouldn't give an accountant access to run your Facebook ads. An accountant is good at running numbers, not marketing. In the same way, you wouldn't trust ChatGPT to write a legal letter.

AI is good at pattern recognition. If you tell AI, "What's the next fruit? Apple, banana, grape," it will probably say "orange." If you say "Apple, Microsoft, Amazon," it will probably say "Meta" or "Facebook."

The patterns differ, but the mechanism is the same: pattern recognition, not original thinking.
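
To see how little thinking that kind of completion requires, here's a toy sketch (the training sequences are invented): even a trivial bigram counter gets both examples right, because completion only needs a memory of what usually comes next.

```python
# A toy "next item" predictor: pure frequency counting, zero understanding.
from collections import Counter, defaultdict

training = [
    ["apple", "banana", "grape", "orange"],    # fruit lists
    ["apple", "banana", "grape", "orange"],
    ["Apple", "Microsoft", "Amazon", "Meta"],  # tech giants (note the case)
]

follows = defaultdict(Counter)  # item -> counts of what followed it
for seq in training:
    for a, b in zip(seq, seq[1:]):
        follows[a][b] += 1

def predict_next(item: str) -> str:
    """Return whatever most often followed `item` in the training data."""
    return follows[item].most_common(1)[0][0]

print(predict_next("grape"))   # orange
print(predict_next("Amazon"))  # Meta
```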

When someone generates an 80-page document in an hour, they're getting 80 pages of useless content they haven't read. What they're not getting is quality, curated content.

Just because you sit with your AI agent for 10 hours a day doesn't mean your AI agent understands every aspect of what you do or how something works. Only you do.

99% of people don't know how to communicate with AI in a way that produces good outcomes. They're using AI not just as a productivity tool but as a substitute for thinking.

The Complicity Question

Humans can only identify AI-generated content correctly 53% of the time. Yet detection tools achieve 89% accuracy on fully AI-generated unedited content.

The gap between what's detectable by machines and what humans notice reveals our complicity. We're accepting what we could identify if we cared enough to look.

People probably don't see the problem. But sending unread content is a foolish thing to do. People will judge you on what you send, not on what you meant to send.

We've normalized this behavior. We've accepted AI slop as "good enough."

The cultural tide is turning. In January 2026, Bandcamp banned AI-generated music entirely. Clothing brand Aerie pledged "No AI-generated bodies or people" in October 2025, which became their most-liked Instagram content of the year.

But are we turning fast enough?

The Long-Term Implications

Research published in Nature describes "model collapse": what happens when AI systems are trained on AI-generated data instead of human-created content. The result is a "degenerative loss of distribution tails and diversity."

AI trained on AI output becomes progressively worse, losing the nuance and variety that came from human sources.
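
A toy illustration of the mechanism (an invented setup, not the Nature paper's experiment): if each generation learns from the previous generation's output but under-samples the rare cases, diversity decays geometrically.

```python
# Toy model collapse: fit a normal distribution to the previous generation's
# output, but drop the rarest 10% of samples (the "tails") each time.
import random
import statistics

random.seed(42)
mu, sigma = 0.0, 1.0  # generation 0: the human-made distribution

for gen in range(1, 11):
    samples = sorted(random.gauss(mu, sigma) for _ in range(1000))
    samples = samples[50:-50]  # the model under-samples rare outputs
    mu, sigma = statistics.mean(samples), statistics.stdev(samples)
    print(f"generation {gen}: sigma = {sigma:.3f}")
# sigma shrinks toward zero: every generation is blander than the last
```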

Some analysts estimate that high-quality human text suitable for training may be effectively exhausted between 2026 and 2032. Models trained on the old internet, before all this slop, may have a permanent edge.

We're not just degrading current content quality. We're poisoning the well for future AI systems.

A 2024 MIT study by Nobel laureate Daron Acemoglu projected only a 0.5% productivity increase over the next decade from AI adoption. A Boston Consulting Group field experiment using GPT-4 found that AI improved performance on tasks within its capability boundary but reduced performance on tasks just beyond it.

The decline stemmed from overreliance on plausible but incorrect model outputs.

In some organizations, AI is driving unprecedented increases in productivity and employee satisfaction. In others, AI is cited as the primary cause of reduced productivity, growing employee brain drain, frustration, and attrition.

The difference? Whether people are thinking or just generating.

Why I Care Enough to Call This Out

I'm staking my entire future on AI. My entire day revolves around what happens in the AI space. I'm a huge advocate of what AI can do, and I don't think AI is going anywhere.

But for AI to truly benefit workplaces and the people in them, and for people to stay employed and valuable, we need critical thinkers.

We need critical thinkers in the world of AI.

How do I reconcile being both a champion of the technology and someone calling out how badly people are misusing it?

I care. I care enough to call it out.

The AI slop economy exists because we allow it to exist. Because we accept 80-page documents generated in an hour. Because we don't push back when we receive emails with "insert name" still visible. Because we've decided that speed matters more than substance.

We're all complicit in this race to the bottom.

The One Rule

Here's the rule I now live by:

If I'm sending it, I've read it. Every word. If I haven't, it doesn't get sent.

That single rule has stopped me from being part of the slop economy. It forces me to think. It forces me to edit. It forces me to engage with my own work before I ask anyone else to.

Adopt it tomorrow.

You'll be shocked how much of what you were about to send is garbage you wouldn't put your name on if you'd actually read it.

That moment of shock is the moment you stop being part of the problem.

The Question You're Probably Asking

"Well, did you write this article with AI?"

Yes. I did.

But here's the difference.

I built a custom AI tool whose only job was to interview me. For an hour, it asked me questions, pulled stories out of me, probed for context, and forced me to articulate things I'd been carrying around in my head for months. Every story in this article came from me. Every opinion. Every framework.
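
The core of such a tool is small. Here's a minimal sketch of the pattern (assuming the OpenAI Python SDK and an illustrative model name; this shows the idea, not the exact tool): the essential move is a system prompt that forbids the model from drafting anything.

```python
# A minimal "interviewer" agent: the model may only ask questions.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SYSTEM = (
    "You are an interviewer. Ask one probing question at a time about the "
    "user's topic. Dig for concrete stories, numbers, and opinions. "
    "Never draft content for the user; only ask questions."
)

messages = [
    {"role": "system", "content": SYSTEM},
    {"role": "user", "content": "Topic: the AI slop economy."},
]

while True:
    reply = client.chat.completions.create(model="gpt-4o", messages=messages)
    question = reply.choices[0].message.content
    print(f"\nInterviewer: {question}")
    answer = input("You: ")
    if answer.strip().lower() == "done":
        break  # the transcript in `messages` is the raw material for drafting
    messages += [
        {"role": "assistant", "content": question},
        {"role": "user", "content": answer},
    ]
```

The AI asks; the human answers. Every substantive word in the transcript comes from the person being interviewed.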

Then I curated every single paragraph. Every topic. Every story. I understand the context. I understand the flow. I read every word before you did.

This article took me the better part of 90 minutes to put together.

That's the difference between using AI to think with you versus using AI to think for you.

The slop economy is built on the second one.

Everything good that AI will ever produce is built on the first.

#ai#opinion#critical-thinking#ai-slop#productivity