Framework 3 · Intelligence Abundance Toolkit
Three executives agree they're "all in on AI" while holding three completely different assumptions about what that means. Nobody surfaces the disagreement. Three months later, those assumptions collide and the initiative stalls.
What goes wrong
Leadership teams appear aligned while holding completely different assumptions. When the initiative stalls, everyone calls it change resistance. It isn't. It's misalignment that was never diagnosed, and it was diagnosable before a single tool was deployed.
Before any tool deploys, every leader must answer these three questions specifically, not in the abstract. The goal isn't consensus. The goal is to surface where the assumptions differ. Disagreement is data.
1. What stays human-only? Not "strategic thinking" or "relationship management." Name actual tasks, decisions, or outputs that your organization has decided will not be AI-augmented. What is the principle driving that decision?
2. Where do humans remain in the loop while AI assists? At what point in the workflow does AI enter? Who decides whether AI output is good enough to use?
3. What does AI handle outright? Humans review outputs rather than producing them. What verification does the human perform? What is the escalation path when AI output is wrong?
The Intelligence Abundance Toolkit includes the 90-Minute Leadership Alignment Session Guide: pre-work instructions to distribute to leaders one week before the session, a facilitation script with timing, a gap-mapping format for the whiteboard exercise, and an action-item template for the decisions that must be made before implementation begins.
Seven frameworks. Seven worksheets. A 90-minute session guide, a 12-month roadmap, and 11 AI prompts. $97.