AI That Sticks

Goldman Sachs reported last month that AI contributed essentially nothing to U.S. GDP growth in 2025. That's not a rounding error obscuring a real effect. It's zero.

Hundreds of billions of dollars have been deployed. Every major enterprise has an AI initiative. Every technology vendor has a product story. And the needle barely moved.

I spend my days inside the organizations trying to figure out why. I work in the actual friction of implementation: the team that quietly stopped using the tool, the workflow rebuilt from scratch six months after launch, the steering committee producing governance documents instead of working software.

What I keep seeing is a consistent misdiagnosis.

The Design Is Backwards

When transformations stall, the conversation almost always goes the same direction. The model was wrong. The vendor overpromised. The implementation took too long. These things happen, but they're almost never the root cause.

The root cause is sequence. Most organizations design the AI system first and try to bring the culture along second. By the time they get to the culture, people have already decided how they feel about it.

Prosci put the investment imbalance plainly: 75% of AI spend goes to technology and 25% to people. The organizations that actually get transformation to hold tend to run that ratio much closer to equal, and some flip it entirely.

Culture isn't the last mile of an AI transformation; it's the first. Yet we keep pouring that foundation last.

I've watched two organizations work through this with nearly identical technology stacks and budgets. One treated AI adoption as an operational priority: clear ownership, real decision rights, and a tolerance for imperfect first iterations. The other routed every AI initiative through a steering committee that met monthly.

Six months later, the first had three production workflows running and was learning fast. The second was still finalizing its strategy document.

The difference was never the technology. It was the organizational muscle to act under uncertainty.

The Question I Ask Before Anything Else

Before we talk about platforms or models or timelines, I've started asking one question: does your organization have the emotional infrastructure to use any AI model well?

That question sounds soft. It isn't.

It means: do people have enough psychological safety to admit when something isn't working? Do managers know how to coach through a transition instead of just announcing one? Does leadership have the credibility to hold the change when the short-term costs show up?

Those aren't cultural nice-to-haves. They're the variables that determine whether the technology investment pays off or gets quietly abandoned.

The Second Gap: Culture-Aligned Deployment

The emotional infrastructure question addresses whether an organization is ready to change. But there's a second question most skip entirely: is the AI they're deploying actually shaped around who they are?

Most organizations treat AI deployment as a configuration exercise. They implement what the vendor provides, adjust the settings, and try to get their people to adopt the result. What they end up with is generic AI that works the way every other organization's AI works, because it was built without any of their specific culture baked in.

The organizations that get transformation to stick make a different set of decisions from the start. They treat how the AI thinks, what it prioritizes, and how it communicates as design decisions, not implementation details. Those decisions reflect actual organizational values, not borrowed ones.

This is harder than it sounds. It requires organizations to be explicit about things they usually leave implicit: how they actually make decisions, what they consider good judgment, what language carries trust. Most organizations have never had to articulate those things. AI deployment forces the question.

The organizations that do that work end up with AI that people recognize as being theirs. The ones that skip it end up with AI that feels like a tool bolted on from the outside. That distinction is the difference between adoption and compliance.

What the Ones That Stick Actually Have in Common

The organizations I've seen actually transform with AI don't start from "what can this do for us?" They start from a different question: who are we, and how does AI express that?

That reframe changes everything. When the AI you're deploying reflects how your organization actually thinks (the language it uses, the judgment calls it makes, what it treats as important), people recognize themselves in it. Adoption isn't a campaign; it's just the way the work gets done now.

I'm running this experiment inside my own organization right now. I'm not advising clients from the outside. I'm building it at Artisan Studios, with my own team as the test case. The lessons are different when you're not the consultant. You find out quickly what was good advice and what was just plausible-sounding.

The most important thing I've learned so far: AI that sticks isn't technology that works. It's technology that fits. Fit doesn't come from outside. It comes from an organization's willingness to understand itself well enough to build something that reflects it. That's a culture question, and most organizations haven't asked it yet.

Originally published on Medium and LinkedIn
