Framework 3 · Intelligence Abundance Toolkit

The Leadership Alignment Audit

Three executives agree they're "all in on AI" while holding three completely different assumptions about what that means. Nobody surfaces the disagreement. Three months later, those assumptions collide and the initiative stalls.

The Failure Mode This Addresses

What goes wrong

Leadership teams appear aligned while holding completely different assumptions. Everyone calls it change resistance. It isn't. It's misalignment that was never diagnosed, and it was diagnosable before a single tool was deployed.

The Three Questions

Before any tool deploys, every leader must answer these three questions specifically, not in the abstract. The goal isn't consensus. The goal is to surface where the assumptions differ. Disagreement is data.

Question 1: What Work Stays Human-Only?

Not "strategic thinking" or "relationship management." Name actual tasks, decisions, or outputs that your organization has decided will not be AI-augmented. What is the principle driving that decision?

Question 2: What Work Gets AI-Augmented?

Humans remain in the loop, but AI assists. At what point in the workflow does AI enter? Who decides whether AI output is good enough to use?

Question 3: What Work Becomes AI-Primary?

AI handles it. Humans review outputs rather than producing them. What verification does the human perform? What is the escalation path when AI output is wrong?

The Practitioner Note

Don't skip this session because leadership seems aligned. Seeming aligned and being aligned are different things. The most dangerous alignment failures are the ones where everyone nodded without realizing they'd imagined different things. If you can't get leadership in a room for 90 minutes before launching a transformation initiative, that's the most important data point you'll collect.

Three Questions Before the Session

Use these to diagnose alignment risk before you run the full session:

  1. Can every member of your leadership team name three specific tasks that will remain human-only? Do their answers match?
  2. If you put all leadership answers to "what gets AI-augmented" on a whiteboard, would the list be consistent or contradictory?
  3. Who in your organization decides whether AI output is good enough to use? Is that decision explicit, or assumed?

The Full Practitioner Tool

The Intelligence Abundance Toolkit includes the 90-Minute Leadership Alignment Session Guide: pre-work instructions to distribute to leaders one week before the session, a facilitation script with timing, a gap-mapping format for the whiteboard exercise, and an action-item template for the decisions that must be made before implementation begins.

The Intelligence Abundance Toolkit

Seven frameworks. Seven worksheets. A 90-minute session guide, a 12-month roadmap, and 11 AI prompts. $97.

Get the Toolkit →