The Intelligence Abundance Framework shows you what to build. This framework shows you how to get there, without breaking what already works.
The problem: Every AI transformation I’ve observed fails for the same reason: organizations treat it like a technology implementation. It isn’t. It’s organizational reconstruction.
The foundation: AI transformation must be rooted in your culture, core values, and vision. Not bolted on. The organizations that get this right ask “how does AI serve what we’re already building?”
The pathway: Four phases built on proven change management science, grounded in your organizational identity, designed for organizations that need to move now.
The Full Framework
The Missing Half
The Intelligence Abundance Framework describes what organizations optimized for AI look like: diamond structures, five-person squads (small, permanent teams with full ownership of their domain), and new collaboration models. This framework describes how to get there.
Every AI transformation I've observed fails for the same reason: organizations treat it like a technology implementation. It isn't. It's organizational reconstruction. The technology is the easy part. The hard part is changing how people work together when intelligence becomes abundant.
And there's a prerequisite most organizations skip entirely: AI transformation must be rooted in your culture, core values, and vision. Not bolted onto them. Not running parallel. Rooted in them. An AI strategy disconnected from who you are as an organization produces technically impressive work that feels wrong: misaligned with your brand, your customer relationships, your people. The organizations that get this right don't ask "how do we adopt AI?" They ask "how does AI serve what we're already building?"
What follows is a four-phase pathway built on proven change management science and direct experience inside organizations making this transition. The phases mirror Lencioni's organizational health disciplines, which is not a coincidence. The organizational health work and the AI transformation work are the same work, described in different vocabulary.
Phase 1: Preparation
Why must we change, and what does change require?
Get Leadership on the Same Page
Before anything else, your senior team needs a shared understanding of what intelligence abundance actually means for your specific organization. Not a vision statement. Not a strategy deck. Operational clarity.
This means agreeing on three things: which work stays human-only, which gets AI-augmented, and which becomes AI-primary. If your leadership team can't answer those questions with specificity, you're not ready for Phase 2.
The most common failure I see is leadership teams that agree on "AI adoption" in principle but hold fundamentally different assumptions about what that means in practice. Those differences don't surface until implementation, when they explode as conflicting priorities and undermined initiatives.
The Leader's Job in Intelligence Abundance
There's a question that sits behind every AI transformation and rarely gets asked out loud: what does the senior leader actually do when the organization runs on squads, constitutional principles, and distributed intelligence?
The honest answer is uncomfortable: less of what made you successful.
In intelligence-scarce organizations, leaders create value by aggregating information, making decisions others can't, and directing execution. Intelligence abundance eliminates all three. Information is universally accessible. AI augments anyone's decision-making capacity. Execution is distributed to autonomous squads.
What remains is the work that can't be automated or delegated:
Constitutional stewardship. Someone has to hold the line on what the organization stands for when speed and capability make it easy to drift. This isn't writing the constitution (squads can contribute to that). It's being the person who says “this is technically excellent and culturally wrong” when an AI-augmented team produces output that violates organizational identity. That judgment requires deep institutional knowledge and the authority to enforce it.
Tension navigation. Autonomous squads will develop different interpretations of shared principles. Two squads will both believe they're acting within the constitution and produce conflicting approaches. The leader's job is not to pick a winner but to use the conflict to sharpen the principles, making the implicit explicit. Every constitutional conflict resolved well makes the next hundred autonomous decisions better.
Environment creation. The leader's primary output shifts from decisions to conditions. The question changes from “what should we do?” to “what environment would produce the best decisions from the people closest to the work?” This includes protecting psychological safety, ensuring squads have what they need, and removing organizational friction that impedes autonomous operation.
Horizon scanning. With execution distributed, leadership attention should shift to longer time horizons. What's changing in the competitive landscape? What AI capabilities are emerging that could reshape squad operations? What adjacent opportunities become reachable with current capabilities? This is strategic work that requires freedom from operational detail, which intelligence abundance finally provides.
The hardest transition for most leaders: your value is no longer measured by the quality of your decisions. It's measured by the quality of the decisions you never had to make because the system you built made them well without you.
Build Psychological Safety Before You Build Anything Else
Here's what makes AI transformation different from every other technology change: in human-only teams, individual errors stay contained. With AI amplification, one person's poor judgment gets multiplied across systems and outputs at machine speed.
Psychological safety isn't a nice-to-have. It's infrastructure.
Your people need to be willing to say "I don't know how to direct the AI to do this" without career consequences. They need to flag errors as system improvements, not personal failures. They need to challenge outputs (including their own) without political cost.
Test this before you deploy AI tools. If people won't admit uncertainty in their current work, they definitely won't admit it when AI amplifies the stakes.
Face the Capability Gap Honestly
Prosci's research is clear: 38% of AI adoption failures come from user proficiency issues. Only 16% come from technical problems. The technology works. The humans struggle.
Assess your organization for collaboration effectiveness, comfort with ambiguity, ability to write clear specifications, experience with iterative processes, and tolerance for continuous learning. These are the capabilities that determine AI transformation success, not technical skills.
Expect the Emotional Resistance: It's Not Irrational
The five failure patterns later in this framework describe how organizations break. This section describes how people break.
AI transformation triggers something deeper than change fatigue. It triggers identity threat. When you tell a senior analyst that AI can do 70% of their analytical work, you're not describing a productivity improvement. You're telling them that 70% of what they've spent a career becoming expert at is no longer exclusively human territory.
The resistance follows a predictable arc:
Dismissal. “AI can't do what I do.” This isn't ignorance. It's a reasonable first response from someone whose expertise is genuine. The work they do is hard. The mistake is assuming that hard-for-humans means hard-for-AI. Address this by demonstrating capability on their actual work, not generic demos. Generic demos let dismissal persist because the gap between the demo and their real work feels enormous.
Anxiety. “What happens to me?” Once capability is demonstrated, the existential question arrives. This is where most organizations fumble. They either minimize the concern (“your job is safe”) or catastrophize (“everything will change”). Both are dishonest. The honest answer: your role will change substantially. Your expertise becomes more valuable, not less, because you're now directing AI systems within a domain you understand deeply. But the day-to-day will look different, and that transition will be uncomfortable.
Grief. Often missed entirely. People mourn the loss of craft: the satisfaction of doing skilled work with their own hands and minds. A financial analyst who spent years developing Excel modeling expertise doesn't just lose a skill when AI takes over modeling. They lose a source of professional identity and daily satisfaction. Acknowledging this grief is not weakness. It's realistic change management. Skip it and you get compliance without commitment.
Negotiation. “I'll use AI for the simple stuff but keep doing the complex work myself.” This is productive if managed well and destructive if it becomes permanent. It's a healthy transitional stage: people need to build trust with AI systems incrementally. But it can calcify into a ceiling where people never let AI into their most important work, which is exactly where the highest leverage exists.
Integration. The person redefines their professional identity around human-AI collaboration rather than solo expertise. This is the target state, and it can't be rushed. It requires genuine experience of AI augmenting their best work, not replacing their routine work.
What this means for implementation: Your timeline must account for this arc. A 90-day pilot is enough for people to reach negotiation. Integration takes six months to a year. Organizations that declare transformation complete after the pilot are measuring tool adoption, not actual capability development.
The seniority paradox: Your most experienced people (the ones whose judgment you need most in an AI-augmented world) will often resist longest. They have the most identity invested in the old way of working. Don't mistake their resistance for inability. It's proportional to what they're being asked to let go of.
Write the Constitution
Before anyone touches an AI system, establish the principles that will govern how both humans and AI operate throughout the transformation.
This constitution isn't a compliance exercise. It's where your culture, core values, and vision become operational. Quality standards should reflect what your organization already believes about excellence. Ethical boundaries should extend your existing values into AI-augmented work. Decision rights should align with how your culture distributes trust and accountability.
The organizations that treat AI governance as a separate workstream from their cultural identity end up with two competing operating systems. The ones that treat AI governance as an *expression* of their cultural identity end up with coherent, fast-moving teams that make decisions autonomously because the principles are already internalized.
Every autonomous decision a squad makes will reference this constitution. If it doesn't sound like your organization, it won't stick.
Anatomy of an AI Operating Constitution
Telling organizations they need a constitution is easy. Telling them what goes in it is where most frameworks stop. Here's what a functional AI operating constitution actually contains:
Identity anchor. Who are we and what do we refuse to compromise? This isn't your mission statement. It's the operational translation. If your organization values craftsmanship, the constitution specifies that AI outputs must meet the same quality bar as human-produced work, that “faster” never justifies “worse,” and that the person reviewing output is accountable for its quality as if they'd produced it themselves. The identity anchor is the shortest section and the most important one.
Decision rights matrix. Which decisions can squads make autonomously, which require cross-squad coordination, and which require leadership input? Be specific. “Squads can approve customer-facing content that follows brand guidelines without escalation. Content that introduces new messaging, makes claims about capabilities, or references competitors requires cross-squad review.” Vague decision rights produce either paralysis (everything escalated) or chaos (nothing escalated). A sketch of what this level of specificity looks like follows the component descriptions below.
Quality standards by work type. Define what “good enough” means for different categories. AI makes it trivially easy to produce more. The constitution must define when more is actually better. Internal analysis has different standards than client deliverables. First drafts have different standards than final outputs. Make this explicit or every squad will calibrate independently, and your organizational quality becomes incoherent.
Ethical boundaries. Where does your organization draw lines on AI use? Not theoretical ethics. Operational ones. Can AI generate first drafts of sensitive communications? Can AI analyze employee performance data? Can AI make recommendations about personnel decisions? Can AI interact directly with customers without human review? These questions have different answers for different organizations, and the constitution is where those answers live.
AI attribution and transparency. When do stakeholders need to know AI was involved? Always? Only for certain work products? Never? Whatever the answer, make it consistent. Organizations that handle this ad hoc end up with trust problems when stakeholders discover AI involvement they weren't told about.
Failure protocols. What happens when AI produces harmful or incorrect output that reaches a stakeholder? Who is accountable? What's the remediation process? How does the organization learn from the failure? Define this before it happens, not after. Post-incident constitutional development is reactive and usually overcompensates.
Evolution mechanism. The constitution must include its own update process. Who can propose changes? How are they evaluated? How quickly can they be implemented? AI capabilities evolve fast. A constitution that can't be updated quarterly is a constitution that becomes irrelevant. But changes must be deliberate, not reactive. Include a cooling-off period: proposed changes are shared organization-wide for review before adoption.
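None of this has to live in software, but writing the most mechanical sections as structured data is a useful forcing function for specificity. Here's a minimal sketch of a decision rights matrix and quality standards expressed that way. Every category, tier, and standard below is invented for illustration; yours have to come from your own constitution.

```python
from enum import Enum

class Authority(Enum):
    SQUAD = "squad decides autonomously"
    CROSS_SQUAD = "requires cross-squad review"
    LEADERSHIP = "requires leadership input"

# Illustrative decision rights matrix -- categories and tiers are invented for
# this example, not prescribed.
DECISION_RIGHTS = {
    "customer_content_within_brand_guidelines": Authority.SQUAD,
    "new_messaging_or_capability_claims":       Authority.CROSS_SQUAD,
    "references_to_competitors":                Authority.CROSS_SQUAD,
    "personnel_recommendations":                Authority.LEADERSHIP,
}

# Illustrative quality standards by work type: what "good enough" means differs
# between a first draft and a client deliverable.
QUALITY_STANDARDS = {
    "internal_analysis":  "directionally correct, sources linked, caveats stated",
    "first_draft":        "complete structure, open questions flagged inline",
    "client_deliverable": "same bar as human-produced work; reviewer is accountable",
}

def route(decision_category: str) -> Authority:
    """Look up who can make a decision; unknown categories escalate by default."""
    return DECISION_RIGHTS.get(decision_category, Authority.LEADERSHIP)

if __name__ == "__main__":
    print(route("customer_content_within_brand_guidelines").value)  # squad decides autonomously
    print(route("an_uncovered_case").value)  # requires leadership input -> a constitutional gap worth closing
```

The point of the exercise isn't the code. It's that you cannot write the lookup table without making the decisions the constitution exists to make.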
Anti-patterns to avoid:
The compliance document. If the constitution reads like a legal brief, no one will internalize it. It should read like a clear articulation of how your organization thinks about AI-augmented work.
The aspirational document. “We will use AI responsibly and ethically” is not a constitutional principle. It's a sentiment. Principles must be specific enough to guide real decisions.
The static document. A constitution written in 2026 and unchanged in 2027 is a failed constitution. Build the expectation of evolution from the start.
The copied document. Your constitution cannot be borrowed from another organization. The entire point is that it expresses your culture, values, and identity. Use others as references, but write your own.
Phase 2: Engagement
What exactly are we building, and how do we build it?
Redefine Roles Around Collaboration, Not Tasks
Traditional job descriptions die when AI can perform most traditional responsibilities. New roles focus on how humans work with AI, not what tasks they execute.
Every squad needs five capabilities (not necessarily five separate people): someone who maintains alignment with organizational principles, someone with deep domain expertise, someone who can effectively direct AI systems, someone focused on quality verification, and someone who coordinates with other teams.
One person can hold multiple roles. The point is ensuring all capabilities are present, not that five people each do one thing.
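If it helps to make that checkable, here's a minimal sketch: a coverage check confirming that a squad of at most five people spans all five capabilities. The capability labels follow the list above; the people and their assignments are hypothetical.

```python
# The five squad capabilities described above; one person may hold several.
CAPABILITIES = {"alignment", "domain_expertise", "ai_direction",
                "quality_verification", "coordination"}

def squad_gaps(assignments: dict[str, set[str]]) -> set[str]:
    """Return the capabilities nobody on the squad covers."""
    covered = set().union(*assignments.values()) if assignments else set()
    return CAPABILITIES - covered

# Illustrative four-person squad: names and assignments are hypothetical.
squad = {
    "Priya":  {"domain_expertise", "alignment"},
    "Marcus": {"ai_direction"},
    "Lena":   {"quality_verification"},
    "Tomas":  {"coordination", "ai_direction"},
}

assert len(squad) <= 5, "coordination overhead grows past five people"
print(squad_gaps(squad) or "all five capabilities covered")
```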
Make the Squad the Unit of Change
This is the single most important decision in your transformation: don't train individuals to use AI tools. Train squads to collaborate with AI as a unit.
Individual AI adoption creates productivity islands. Squad-level adoption creates productivity systems. The difference determines whether AI produces leverage or waste.
Form squads around clear domain focus, compatible working styles, complementary expertise, and commitment to learning. Cap them at five people, not because it's a nice round number, but because cognitive science shows coordination overhead starts consuming productivity gains beyond five.
Choose Your Collaboration Models
Different work needs different human-AI patterns. Map your portfolio:
Human-in-the-Loop for high-stakes decisions requiring verification. Tiered Review for process work with clear exception patterns. Centaur for strategic work requiring iterative human-AI collaboration. Cyborg for creative work that benefits from fluid interaction.
Most organizations need all four. The question isn't which model to choose. It's which work gets which model.
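The mapping itself can be embarrassingly simple. What matters is that every category of work gets an explicit model, and anything unmapped defaults to the most conservative one. A minimal sketch, with work categories invented for the example:

```python
from enum import Enum

class Model(Enum):
    HUMAN_IN_THE_LOOP = "human verifies every output"
    TIERED_REVIEW     = "humans review exceptions, spot-check the rest"
    CENTAUR           = "human and AI alternate in iterative turns"
    CYBORG            = "fluid, continuous human-AI interaction"

# Illustrative portfolio map -- the work categories are placeholders.
PORTFOLIO = {
    "pricing_approvals":   Model.HUMAN_IN_THE_LOOP,  # high stakes, needs verification
    "invoice_processing":  Model.TIERED_REVIEW,      # clear exception patterns
    "quarterly_strategy":  Model.CENTAUR,            # iterative strategic work
    "campaign_concepting": Model.CYBORG,             # creative, fluid interaction
}

def model_for(work_type: str) -> Model:
    """Unmapped work defaults to the most conservative pattern until someone maps it."""
    return PORTFOLIO.get(work_type, Model.HUMAN_IN_THE_LOOP)

print(model_for("pricing_approvals").name)   # HUMAN_IN_THE_LOOP
print(model_for("something_unmapped").name)  # HUMAN_IN_THE_LOOP until it's mapped
```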
Accept That This Never Ends
Unlike traditional technology implementations with defined endpoints, AI capabilities evolve continuously. Your transformation framework must support ongoing adaptation, not deliver a one-time change.
This is the "never-ending Phase 2" reality. Your constitutional framework, your collaboration models, your squad configurations: all will need continuous refinement as AI capabilities advance. Build for adaptation, not for completion.
Phase 3: Implementation
How do we execute without breaking what works?
Start With One Squad
Start with one squad. Get it right. Then scale the patterns that work.
Pick something with high visibility but contained scope. Success needs to be obvious, but failure can't be catastrophic. Choose willing participants with enough domain expertise to evaluate AI output quality, and give them leadership air cover to experiment.
Minimum 90 days. AI collaboration skills develop through practice, not training programs. Anything shorter doesn't allow real capability development.
Develop Skills Through Work, Not Classrooms
The capabilities required for AI collaboration can't be taught in a training room. They develop by doing real work with AI assistance.
Four skill areas matter: writing clear specifications for AI systems, rapidly evaluating AI output quality, iterating effectively through multiple refinement cycles, and knowing when to accept AI output versus when to start over.
All four develop through practice on actual projects. The work provides context for skill development. Skill development improves work quality. It's a virtuous cycle, but only if you let people learn on real work rather than sandbox exercises.
Generate Wins That Matter
AI transformation needs evidence. Generate wins across four categories:
Velocity: tasks that take dramatically less time. Quality: outputs that exceed previous standards. Capacity: work you couldn't do before. Innovation: solutions that emerge from AI-augmented thinking.
Focus on capability expansion, not just efficiency. "We did the same thing 40% faster" is less compelling than "we did something we couldn't do before."
Iterate, Don't Optimize
Squads improve through cycles: plan the work, execute with AI, review what happened, adapt the approach, integrate what worked. Repeat.
The critical mistake is optimizing too early. If you start measuring efficiency before you've figured out effective collaboration patterns, you'll lock in mediocre approaches. Let squads experiment. Measure learning, not just output.
The Valley Between Two Operating Models
There is a point in every AI transformation (usually months three through five of implementation) where the old operating model is visibly broken and the new one isn't working yet. Kotter calls this the urgency maintenance problem. Practitioners call it the valley of despair. Whatever you call it, plan for it, because it will happen.
Here's what the messy middle looks like:
Pilot squads are producing inconsistent results. Some outputs are genuinely impressive. Others are worse than what the old process delivered. The inconsistency is demoralizing because people expected a clean upward trajectory and got a jagged one.
The rest of the organization is watching the pilots with a mixture of curiosity and schadenfreude. Every pilot failure gets amplified. Every success gets qualified with “but that wouldn't work for our team.” The narrative that AI transformation is overhyped starts gaining traction.
Leaders who championed the transformation feel exposed. They made promises about outcomes that haven't materialized yet. The temptation to pull back, slow down, or pivot to “incremental AI adoption” becomes intense.
How to survive it:
Inoculate early. Tell everyone (leadership, pilot squads, the broader organization) that the valley is coming. Describe it specifically. “Around month three or four, results will be inconsistent and it will feel like this isn't working. That's the learning curve, not evidence of failure.” When the valley arrives on schedule, it confirms your credibility instead of undermining it.
Measure learning, not output. During the messy middle, output metrics are misleading. A squad producing inconsistent results but developing new collaboration patterns is on track. A squad producing steady results by defaulting to old methods with AI as a thin veneer is not. Track capability development: Are squad members improving their ability to direct AI systems? Are they developing quality intuition? Are they discovering new collaboration patterns? These leading indicators predict future performance better than current output metrics.
Protect the pilot squads. The messy middle is when organizational antibodies attack hardest. People who feel threatened by the transformation will use pilot inconsistency as ammunition. Leaders must explicitly shield pilot squads from premature judgment while maintaining accountability for learning and adaptation.
Shorten feedback loops. Weekly retrospectives during the messy middle, not monthly. The faster squads can identify what's not working and adjust, the shorter the valley becomes. This is also when squad psychological safety gets tested for real. People need to be able to say “this approach isn't working” without it being heard as “the whole initiative is failing.”
Celebrate the ugly wins. The messy middle produces a specific kind of success: things that work but aren't pretty. A squad that discovers a new collaboration pattern through three failed attempts has achieved something valuable, even if the final output is merely adequate. Recognize the learning, not just the result.
Hold the line on constitutional adherence. The temptation during the messy middle is to relax standards to generate better-looking results faster. This is exactly wrong. The constitution is what ensures the new operating model produces coherent, aligned output. Relaxing it produces short-term numbers and long-term fragmentation.
What to Measure When Nothing Is Stable Yet
The intelligence abundance metrics (cognitive leverage ratio, decision velocity, expertise democratization) describe the target state. They're useless during early transformation because the systems producing those outcomes don't exist yet. You need a different measurement framework for the journey:
Phase 1 (Preparation): Readiness metrics: Can your senior team independently articulate the same transformation rationale and expected outcomes? Test this by asking them separately. Disagreement here predicts implementation conflict. Measure your psychological safety baseline using Edmondson's framework or equivalent. You need a pre-transformation measurement so you can track whether transformation is strengthening or eroding trust. Assess what percentage of your workforce has basic AI collaboration skills: the ability to specify, evaluate, iterate, and integrate.
Phase 2 (Engagement): Clarity metrics: Can team members use the constitution to make an autonomous decision? Give them a scenario and see if they reach consistent conclusions independently. If they can't, the constitution isn't specific enough. Can every person in the pilot describe their squad role and how it differs from their pre-transformation role? Confusion here means engagement work isn't done.
Phase 3 (Implementation): Learning metrics: Are squads producing better outputs on the third attempt than the first? Track improvement rate, not absolute quality. Are squad members catching AI errors before they reach external stakeholders? This improves with practice. Track the curve. Are squads developing new ways of working with AI, or using the same approach for every task? Novelty in approach indicates real capability development. How often do squads escalate to leadership? This should decrease over time as constitutional interpretation matures. (A minimal tracking sketch follows the anti-metric note below.)
Phase 4 (Reinforcement): Integration metrics: Now your target-state metrics become meaningful. Cognitive leverage ratio, decision velocity, expertise democratization: all measurable once squads are operating stably. Watch for cultural indicators: are new hires expected to have AI collaboration skills? When this shifts from “nice to have” to “required,” integration is real. Are non-pilot teams adopting squad practices without being mandated to? Organic adoption is the strongest signal that the transformation is self-sustaining.
The anti-metric: Avoid measuring AI tool usage rates. High usage doesn't indicate effective collaboration. Some of the worst AI adoption looks great on usage dashboards: people generating volumes of mediocre output because the tool is easy to use, not because they're using it well.
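Of everything above, the Phase 3 learning metrics are the easiest to leave as good intentions. Here's a minimal sketch of how a squad might track two of them: improvement rate across attempts and the error-catch curve. The numbers are invented purely for illustration.

```python
from statistics import mean

# Quality scores per deliverable attempt (1-5 scale, reviewer-assigned).
# Numbers are invented for illustration only.
attempts = {
    "deliverable_a": [2.5, 3.0, 4.0],
    "deliverable_b": [3.0, 3.5, 3.5],
}

def improvement_rate(scores: list[float]) -> float:
    """Average per-attempt gain from first to last attempt."""
    return (scores[-1] - scores[0]) / (len(scores) - 1)

print(round(mean(improvement_rate(s) for s in attempts.values()), 2))  # 0.5

# Error-catch rate per month: errors caught internally / total errors found
# (including those flagged by external stakeholders). The curve should trend upward.
caught_internally = [4, 7, 9]
caught_externally = [3, 2, 1]
catch_rate = [i / (i + e) for i, e in zip(caught_internally, caught_externally)]
print([round(r, 2) for r in catch_rate])  # [0.57, 0.78, 0.9]
```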
Phase 4: Reinforcement
How do we make this permanent and expandable?
Integrate Into Culture, Not Just Process
The new ways of working must become natural, not mandated. This happens through demonstrated success, not executive decree.
Update hiring to select for AI collaboration readiness. Update performance evaluation to reward effective human-AI work. Create career paths that recognize AI orchestration as real expertise. Make strategic planning assume intelligence abundance capabilities.
This takes years, not months. Plan accordingly.
Build Squad Networks
Successful squads don't operate alone. They coordinate through networks:
Horizontal: squads with similar functions share techniques. Vertical: squad representatives join strategic decisions. Learning: cross-functional groups advance AI collaboration skills. Innovation: experimental squads explore emerging capabilities.
The network architecture prevents squad insularity while preserving the autonomy that makes them effective.
When Squads Collide
Autonomous squads will disagree. Not might. They will. If they don't, either the constitution is so restrictive that autonomy is illusory, or squads have siloed so completely that their work never intersects. Both are failure modes.
Productive conflict between squads is a sign that autonomous operation is working and that the constitutional framework needs refinement. The question is whether you have a resolution mechanism or whether conflicts escalate to leadership by default, which defeats the purpose of distributed authority.
Tier 1: Squad-to-Squad. First attempt at resolution happens directly between the squads involved. They reference the constitution, present their interpretations, and try to find alignment. Most conflicts resolve here if squads share genuine constitutional commitment and sufficient psychological safety to admit “our interpretation might be wrong.” Set a time limit: 48 hours for non-urgent conflicts, same-day for anything affecting external stakeholders. Without a time limit, tier 1 becomes conflict avoidance disguised as deliberation.
Tier 2: Network Arbitration. If squad-to-squad resolution fails, the conflict goes to the relevant squad network (a peer group of squad coordinators from the same functional area). The network reviews both interpretations against the constitution and either identifies the stronger interpretation or determines that the constitution itself is ambiguous. This tier exists because peer judgment has legitimacy that hierarchical judgment lacks. A squad is more likely to accept “your peers reviewed your approach and found a gap” than “leadership overruled you.”
Tier 3: Constitutional Escalation. If the conflict reveals genuine constitutional ambiguity (a situation the principles don't clearly address) it escalates to leadership. But the escalation isn't “make this decision for us.” It's “the constitution doesn't cover this case; we need a new principle or a refined existing one.” This is valuable. Every tier 3 escalation improves the constitution. The goal is not zero escalations. The goal is escalations that produce better principles rather than one-off rulings.
What to watch for: Chronic tier 1 failures between the same squads usually signal a people problem, not a constitutional problem. Tier 2 consistently splitting along functional lines means the network is becoming tribal rather than principled. Rotate network composition. Tier 3 escalations that produce rulings instead of principles mean you're rebuilding a hierarchy with extra steps. And zero escalations means either squads are avoiding conflict or never intersecting. Both are bad. Healthy autonomous organizations produce regular, productive constitutional friction.
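One way to make those watch-fors observable is to treat escalations as data rather than anecdotes. A minimal sketch, using the time limits from this section; everything else (field names, squads, the interpretation of "same-day" as eight working hours) is an assumption for illustration.

```python
from __future__ import annotations
from dataclasses import dataclass
from datetime import timedelta

@dataclass
class Conflict:
    squads: tuple[str, str]
    affects_external_stakeholders: bool
    tier: int = 1
    resolution: str | None = None
    produced_new_principle: bool = False  # only meaningful once a conflict reaches tier 3

    @property
    def tier1_deadline(self) -> timedelta:
        # "Same-day" modeled here as 8 working hours (an assumption); 48 hours otherwise.
        return timedelta(hours=8) if self.affects_external_stakeholders else timedelta(hours=48)

def escalate(c: Conflict) -> Conflict:
    """Move an unresolved conflict up one tier: squad -> network -> constitutional."""
    if c.resolution is None and c.tier < 3:
        c.tier += 1
    return c

def health_signals(conflicts: list[Conflict]) -> dict:
    """The watch-fors above, as numbers a leadership team can review quarterly."""
    tier3 = [c for c in conflicts if c.tier == 3]
    return {
        "total_conflicts": len(conflicts),  # zero is itself a warning sign
        "tier3_count": len(tier3),
        "tier3_rulings_without_new_principle": sum(not c.produced_new_principle for c in tier3),
    }

c = Conflict(squads=("growth", "brand"), affects_external_stakeholders=True)
print(c.tier1_deadline)   # 8:00:00 -> same-day resolution expected
print(escalate(c).tier)   # 2 -> network arbitration
```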
Scale Through Patterns, Not Mandates
Expansion happens by replicating what works, not by centralizing control.
Successful squad models get adapted for similar functions. Mature squads take on coordination responsibilities. Network effects spread innovation organically. The key constraint: maintain squad autonomy. Excessive centralization destroys the agility that makes this work.
Keep Adapting
AI capabilities will keep evolving. Your organization must evolve with them.
Regular assessment of new AI capabilities and their implications. Experimentation with emerging collaboration patterns. Constitutional updates based on organizational learning. Network-based sharing of innovations across squads.
This is not a transformation with a finish line. It's a new way of operating. The organizations that treat it as ongoing development will continuously outperform those waiting for the "final" implementation.
Where AI Transformations Die
Five failure patterns kill more AI transformations than any technical limitation:
The Tool Fixation Trap. Organizations obsess over selecting AI tools rather than developing collaboration capabilities. The tools are commodities. The capability to use them effectively is the differentiator. Start with desired outcomes, then choose tools that support them.
The Individual Adoption Fallacy. Training individuals to use AI tools instead of developing team collaboration. Individual adoption creates productivity islands. Team adoption creates productivity systems. Make the squad the unit, not the person.
The Efficiency Obsession. Optimizing for speed and cost before establishing effective collaboration patterns. Premature optimization prevents the experimentation necessary for real capability development. Focus on effectiveness first. Efficiency follows.
The Safety Theater Problem. Elaborate AI governance processes that create oversight illusion without substance. Complex approval chains slow adoption without improving outcomes. Real safety comes from human capability, not bureaucratic process.
The Cultural Underestimation. Treating AI transformation as technology deployment. It's cultural change. AI that isn't grounded in your organization's values, vision, and identity will produce output that's technically competent and culturally incoherent. Invest as much in aligning AI with who you are as you invest in the technology itself. Probably more.
How This Connects
This transformation framework implements the design principles from the Intelligence Abundance Framework:
The diamond structure emerges through role redefinition and squad formation. Collaboration models are selected and refined through practice. Constitutional governance gets established early and operationalized throughout. The squad architecture is the primary implementation mechanism. Psychological safety is the foundation everything else builds on.
The change management work and the organizational design work are the same work, described in different vocabulary.
Timeline
Preparation: 4–8 weeks. Leadership alignment, psychological safety, constitutional foundations. If your leadership team can't get aligned in two months, the timeline isn't the problem.
Engagement: 2–4 weeks. Role architecture, squad formation, collaboration model mapping. This is definition work, not committee work.
Implementation: 3–6 months. Pilot squads, skill development, iterative refinement, wins generation. Long enough for real capability development, short enough to maintain urgency.
Reinforcement: Ongoing. Cultural integration, scaling, continuous adaptation. This never ends, and that's the point.
Total: 6–9 months to operational intelligence abundance, with continuous adaptation after that.
The Bottom Line
AI transformation fails when organizations treat it as technology implementation. It succeeds when they approach it as organizational reconstruction.
Get the human elements right, and the technology follows. Get them wrong, and no amount of AI sophistication will save you.
The framework exists. The technology is available. The pathway is clear. The only remaining question is whether leadership will commit to the changes required to leverage them.
This is not about the future. Organizations are making this transition now. The choice is whether to do it systematically or accidentally.