What AI Can't Fix (Because You Broke It First)

Deloitte's TrustID Index found that trust in company-provided AI tools fell 31% between May and July 2025. Trust in agentic AI systems dropped 89% in the same period. Organizations are diagnosing this as an AI problem. It isn't. It's a trust problem that existed long before the first model was deployed.

1. The Debt Preceded the Deployment: BetterUp's survey of more than 200,000 US workers found that employees' comfort raising concerns to leadership has declined quarter over quarter since 2020, years before AI arrived at scale. Fewer than half of employees trust their senior leaders, per a 2025 Gartner survey. You can't borrow against trust you haven't built.

2. Employees Don't Trust AI Because They Don't Trust Leaders: HBR research makes the connection explicit: employee trust in AI reflects broader confidence in leadership. Accenture found 95% of employees value working with AI, but don't trust organizational leaders to implement it thoughtfully. When leaders haven't earned the benefit of the doubt, AI becomes another source of suspicion, not a tool for empowerment.

3. The Transparency Asymmetry: Organizations ask employees to be open about AI usage, workflow gaps, and capability deficits, while remaining opaque about how AI will affect roles and futures. Stanford's 2025 Foundation Model Transparency Index found AI companies average just 40 out of 100 on transparency, down from 58 in 2024. The ask for openness flows in one direction. Employees notice.

4. The Cost of Unacknowledged Fear: The Edelman Trust Barometer found 61% of employees worry technology will make their jobs obsolete, yet only 34% say their employer is doing enough to prepare them. When leaders don't acknowledge the fear, it doesn't disappear. Employees route it underground, into resistance and quiet non-compliance with every AI initiative that follows.

5. Trust Has a Financial Floor: BetterUp research found a 10% decline in trust can cut financial performance by as much as $115 million for a $500 million company over four years. Trust is not a cultural nicety. It is load-bearing infrastructure, and most organizations are attempting AI transformation on a foundation they've been quietly undermining for years.

AI doesn't create trust deficits. It makes existing ones impossible to ignore. The organizations that succeed aren't the ones with the best models. They're the ones that did the harder, quieter work of earning trust before they needed to spend it.

What to Watch: "How to Build (and Rebuild) Trust" by Frances Frei is the clearest framework for understanding why trust breaks and what restoring it actually requires.

What's your take: Is your organization building AI adoption on top of a trust foundation that was already cracked?

Originally published on LinkedIn
