The Doom Loop Gets the Headlines. Here's What I'm Actually Seeing.

The Kobeissi Letter, a widely followed macro-financial research publication, just published an 8,000-word piece arguing that the AI doom loop might be the wrong frame. The loop goes like this: AI takes jobs, consumption falls, businesses automate more, the cycle accelerates. Their argument is that the most underpriced outcome isn't collapse; it's abundance.

That's a compelling argument, and it's probably right.

But it's written from 30,000 feet. I spend my days at ground level.

I'm the person companies call when they've decided to "do something with AI" but haven't figured out what that means yet. I watch how organizations actually adopt AI: not in theory, not in headlines, not in market cap swings. I see it in the day-to-day friction of implementation, the resistance of teams, the decisions of leaders who are genuinely trying to figure out what AI means for their work.

Here's what I'm actually seeing: it's not a doom loop. It's something slower, stranger, and (honestly?) more interesting.

The Real Problem Isn't AI Moving Too Fast

The narrative right now treats AI disruption as something happening to organizations. The Anthropic releases, the market selloffs, the job displacement headlines: it all frames companies as targets.

What I observe is different. The biggest risk most organizations face isn't that AI is disrupting them too aggressively.

It's that they're not moving fast enough to meet it.

The companies I work with aren't suffering from AI-driven workforce reduction. They're suffering from AI-driven competitive pressure while their internal adoption moves at the speed of committee approval and the long slog of getting from Dev to Test to Prod.

Their competitors are deploying while they're deliberating.

That gap is where the real disruption lives: between the organizations that are figuring this out and the ones still debating whether to. Not in the macro labor market, but in the client list, the win rate, the delivery capability.

What "Margin Compression" Actually Looks Like at Ground Level

The Kobeissi piece frames this accurately: AI is commoditizing cognition. Workflows that required trained human attention are getting cheaper. What it doesn't capture is what that feels like inside an organization trying to adapt.

It feels like every estimation assumption you've built over the last decade is now wrong.

When a skilled developer can do in four hours what used to take four days, what does that do to your staffing model? Your pricing? Your client expectations? Your team's sense of professional identity?

These aren't abstract questions. They're the conversations I'm in every week. And there is a lot of myth being created by the hype cycle; separating signal from noise is becoming its own full-time job.

The answer isn't to slow down AI adoption to protect the old model. The answer is to build organizations that can translate AI productivity gains into expanded capacity, better outcomes, and new kinds of value rather than headcount reduction alone. That means understanding how you can best adapt and adopt AI for your customers and your business, not just following the herd.

That's what the abundance thesis actually requires at the organizational level. Not passive benefit from lower prices, but active work to capture the productivity gains and redeploy them into growth.

The Variable Nobody's Pricing In

The Kobeissi piece identifies productivity as "the core variable." I agree. But there's a variable upstream of productivity that I don't see discussed enough:

Leadership readiness.

Productivity gains from AI don't manifest automatically. They require organizations that can restructure workflows, retrain people, build new processes, and make decisions under uncertainty at the pace the technology is actually moving.

Most organizations can't do that right now. Not because they lack technology access. Because they lack the organizational capability to absorb and deploy change at this speed.

I've seen this play out concretely. Two companies with similar budgets, similar tech stacks, similar ambitions. One has a leadership team that treats AI adoption as an operational priority with clear ownership, decision rights, and a tolerance for imperfect first iterations. The other routes every AI initiative through a steering committee that meets monthly. Six months later, the first company has deployed three production workflows and learned enough to know what to build next. The second is still finalizing its AI strategy document.

The difference was never the technology. It was the organizational muscle to act under uncertainty.

This is the gap I'm trying to close in my work, and it's fundamentally a leadership and culture problem, not a technology problem.

The companies that will capture the "abundance GDP" the Kobeissi piece describes aren't the ones with the best AI tools. They're the ones that have built the organizational muscle to keep adapting as the tools evolve.

What the "Doom Loop" Gets Right (And Why That's Important)

I don't want to dismiss the bearish case entirely, because it's pointing at something real. Sam Altman and others have predicted that the first billion-dollar single-person company will emerge in the next year or two. There are even betting pools on it.

The concern about disproportionate impact on white-collar employment is legitimate. The question of who captures productivity gains, whether they flow broadly or concentrate narrowly, matters enormously.

The Kobeissi piece argues that AI lowers barriers to entrepreneurship and could actually flatten the wealth divide. I want to believe that, and I think it's possible. However, it will require deliberate choices from organizations, from leaders, from policymakers. It doesn't happen automatically because technology costs fall.

The doom loop fails as a prediction, I think, for the same reason most technological doomsday predictions fail: it underestimates human and organizational adaptation. But adaptation isn't a given. It's work, some of the hardest we can do.

The Practical Implication

If you're leading a team or an organization right now, here's what this means in practice:

The AI transition is happening whether you're ready or not. The question isn't whether to engage with it; it's how quickly you can build the organizational capability to absorb and deploy it effectively.

That means investing even more in your people than in your tools. It means building governance that enables responsible experimentation without creating unintended bottlenecks. It means having the strategic clarity to know which AI investments connect to real business outcomes.

Most of all, it means not waiting for certainty before moving. Because the organizations waiting for clarity on how this all plays out are going to be watching the ones that figured it out while they were deliberating.

The abundance scenario is real. Getting there requires real work.

I'm in the middle of that work; it's not pretty or easy, but it's not a doom loop either.

It's just transformation. Which is hard, and slow, and then suddenly fast.

Originally published on Medium and LinkedIn
