The organizational structures we built for a world where intelligence was scarce do not function in a world where intelligence is abundant. That is not a prediction. It is a description of what is already happening.
For the entirety of organizational history, intelligence has been a scarce resource. Specialized knowledge was expensive to develop, took years to acquire, and was difficult to scale. This scarcity shaped everything about how organizations were built. Pyramids exist because knowledge was rare and needed to be concentrated at the top; hierarchies exist because information had to flow through gatekeepers; management layers exist because the people who knew things needed to supervise the people who did things. Every organizational structure you have ever worked in was designed around the assumption that expertise is limited, expensive, and needs to be carefully allocated.
That assumption has expired.
AI has initiated a shift from intelligence scarcity to intelligence abundance. This is not a metaphor or a prediction about where things are heading. It is a description of the current state: specialized knowledge that once required years to acquire is now accessible through AI interfaces; analytical capabilities that once required teams of senior specialists can be accessed by a single mid-career professional with the right tools; cognitive leverage that was previously unimaginable is now routine. The evolution through four AI paradigms, from rule-based systems in the 1970s through machine learning, deep learning, and now large language models, has fundamentally transformed the economics of organizational intelligence.
What Abundance Changes
Intelligence abundance transforms organizations through four mechanisms, each of which has structural implications that most leaders have not yet confronted.
Democratization of expertise. Expertise is being democratized. Specialized knowledge that once required years of study and commanded premium compensation is becoming widely accessible. The barrier to utilizing expertise is shifting from 'years of study' to 'ability to effectively direct AI systems,' which is a vastly lower threshold. This does not eliminate the value of deep expertise; it changes where that value sits. The expert's role shifts from possessing knowledge to verifying and directing the application of knowledge that AI makes available to everyone.
Productivity multiplication. AI acts as a universal productivity multiplier across virtually all knowledge work. Early adopters of AI coding assistants have reported productivity gains of thirty to forty percent, with some specialized tasks seeing improvements of three hundred percent or more. But the word 'productivity' obscures what is actually happening. It is not that people are doing the same work faster. It is that the nature of what one person can accomplish has changed categorically.
Cognitive leverage. This is the most significant impact, and the one that connects directly to organizational design. One person with AI can now accomplish what previously required a team. Research that took months can be completed in days. Individual creators can produce at the scale of studios. Startups can operate with the capabilities of much larger organizations. This cognitive leverage does not just change headcount math; it changes what team structures are possible and what coordination costs are tolerable.
Organizational transformation. Traditional hierarchies were built around knowledge scarcity: the people who knew things sat above the people who did things, and the hierarchy existed to distribute scarce expertise. When expertise becomes abundant, the hierarchy loses its economic rationale. New organizational models are emerging that distribute decision-making authority, flatten coordination structures, and operate through shared standards rather than management oversight.
The Organizational Shapes of Abundance
The traditional pyramid, wide at the base with many entry-level executors, narrowing through management layers to a small leadership team at the top, was the structural expression of intelligence scarcity. AI is dissolving the base of that pyramid. Entry-level execution work is being automated or dramatically augmented, which means the large workforce that once occupied the bottom of the hierarchy is no longer necessary at the same scale.
What emerges is a diamond. The bottom narrows as AI handles execution-layer work. The middle expands as experienced professionals direct AI agents, verify output, and make judgment calls that require human context. The top remains small: a tight leadership team that maintains the organization's standard of correctness. This diamond shape is not a theoretical projection. PwC, Gartner, and Deloitte have all independently described versions of it, and organizations across industries are already experiencing the compression of their base and the expansion of their middle.
But the diamond, as most people describe it, is incomplete. It tells you where the people sit. It does not tell you how they coordinate. Saying 'the middle expands' without specifying how those mid-level professionals are organized is like saying 'the orchestra gets bigger' without specifying whether they are playing the same piece. The diamond needs internal architecture, and that architecture needs to be grounded in what we know about human coordination limits.
The research on this is convergent and decades old. Robin Dunbar's work on primate neocortex size established that the human brain can sustain deep, high-context coordination with approximately five people, with effectiveness peaking again at layers of fifteen, fifty, and one hundred fifty. Fred Brooks demonstrated in 1975 that adding people to a software project increases communication overhead faster than it increases capacity. Military organizational research has confirmed that the ideal design team is five to six people, effective up to nine, and inefficient at twenty. These are not management opinions; they are biological constraints that AI did not change, because AI did not rewire the human brain.
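Brooks's overhead argument is, at bottom, arithmetic: pairwise communication channels grow as n(n-1)/2, so coordination cost grows quadratically while raw capacity grows only linearly. A minimal sketch (my own illustration, not from the cited text) makes the team-size thresholds above concrete:

```python
# Brooks's observation in arithmetic: pairwise communication channels
# grow quadratically with team size, while capacity grows linearly.
# Team sizes follow the ranges cited above: 5-6 ideal, 9 workable,
# 20 inefficient.
def channels(n: int) -> int:
    """Number of pairwise communication channels in a team of n people."""
    return n * (n - 1) // 2

for n in (5, 9, 20):
    print(f"{n:>2} people -> {channels(n):>3} channels")
# 5 people have 10 channels; 9 people have 36; 20 people have 190.
```

Quadrupling the team from five to twenty multiplies the channels by nineteen, which is why the research converges on five as the size where deep, high-context coordination remains tractable.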
The architecture I propose for the diamond's interior is a network of autonomous five-person workshops, each operating against a shared constitutional standard of correctness. The workshops are loosely coupled through Dunbar layers: three to four workshops per domain, three to four domains per strategic objective, coordinated by a small leadership team at the top of the diamond that maintains the constitution. This is a federated structure where coherence comes from shared principles rather than hierarchical oversight, which is the only coordination model that scales when every five-person team is producing at the level of a traditional department.
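As a back-of-envelope check (my own sketch, not from the research paper), the nesting described above can be compared against the Dunbar layers of roughly 5, 15, 50, and 150, taking the low end of each 3-to-4 range:

```python
# A back-of-envelope illustration (mine, not from the research paper)
# of how the nested workshop structure tracks the Dunbar layers of
# roughly 5, 15, 50, and 150, using the low end of the 3-to-4 ranges.
WORKSHOP_SIZE = 5          # people per autonomous workshop
WORKSHOPS_PER_DOMAIN = 3   # low end of the 3-4 range
DOMAINS_PER_OBJECTIVE = 3  # low end of the 3-4 range
OBJECTIVES = 3             # assumed number of strategic objectives

workshop = WORKSHOP_SIZE                    # 5   -> Dunbar's inner layer
domain = workshop * WORKSHOPS_PER_DOMAIN    # 15  -> next Dunbar layer
objective = domain * DOMAINS_PER_OBJECTIVE  # 45  -> near the 50 layer
org = objective * OBJECTIVES                # 135 -> near the 150 layer
print(workshop, domain, objective, org)     # 5 15 45 135
```

The upper end of the ranges (four per layer) yields 20, 80, and 320, which overshoots the outer Dunbar layers; this is one reason the three-to-four bounds matter rather than being arbitrary.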
How Humans and AI Work Together Inside the Diamond
The question of how humans collaborate with AI inside these structures is not a single answer. It is a spectrum. In the full research paper, I describe four models that represent increasing levels of integration.
Human-in-the-loop. Humans review and approve AI output after the fact. This maintains high human control and is appropriate for high-stakes decisions where errors are expensive.

Tiered review. AI operates autonomously on routine work while humans handle exceptions, allowing organizations to scale without proportional headcount growth.

Centaur. Humans delegate specific tasks to AI while maintaining strategic direction. This is where most knowledge workers currently operate: using AI for research, drafting, and analysis while making the integrative decisions themselves.

Cyborg. Human and AI work in continuous, fluid interaction, and the boundary between their contributions blurs. This is the frontier, and it requires the highest level of human judgment and specification capacity.
The practical implication is that 'can this person work with AI?' is not a useful question. The useful question is: at what level of integration can this person operate, and what level does their role require? A person who can function effectively in human-in-the-loop mode is a different hire from a person who can operate in cyborg mode. Organizations that do not distinguish between these levels will misallocate their most important resource: the human judgment that makes AI output correct rather than merely plausible.
The Cultural Foundation
None of this works without the cultural substrate to support it. Intelligence abundance requires a set of mental model shifts that are more difficult than any technology implementation: from viewing AI as a replacement to seeing it as augmentation; from fixed knowledge to continuous learning; from hierarchical decision-making to distributed intelligence; from individual performance to collaborative success.
The most critical of these is psychological safety. In a five-person workshop where every member's judgment is amplified by AI and every piece of output is fully exposed, people must be willing to say 'this is not right' without political cost. They must be willing to admit uncertainty, flag errors, and challenge each other's assumptions in real time. Without this trust, the workshop ships AI-generated work that looks correct and is subtly wrong, and it does so at a velocity that makes course correction expensive. Psychological safety is not a cultural nice-to-have in the age of intelligence abundance. It is the verification mechanism that determines whether AI produces leverage or waste.
I have also found a structural parallel that I think is underappreciated. The four-phase change management sequence I developed for intelligence abundance transitions (preparation, engagement, implementation, reinforcement) maps almost exactly onto Patrick Lencioni's four disciplines of organizational health: build a cohesive team, create clarity, overcommunicate clarity, and reinforce clarity. Two frameworks, developed independently for different purposes, converging on the same sequence. This suggests that the organizational health work and the AI transformation work are not separate workstreams. They are the same workstream, described in different vocabulary.
What Comes Next
This article is a condensed summary of a longer research paper that develops these ideas in full, including detailed collaboration frameworks, upskilling strategies, performance metrics for intelligence-abundant organizations, and case studies from organizations that have successfully made the transition.
Starting this Friday, I will be publishing a biweekly series called Organizational Design for the AI Era that applies this framework to specific organizational challenges: building a constitutional standard of correctness, restructuring for psychological safety, hiring for the AI era, and expanding organizational ambition beyond the efficiency trap. Each article builds on the intelligence abundance framework described here and connects it to the practical decisions that leaders are making right now.
The shift from intelligence scarcity to intelligence abundance is the most significant organizational transformation since the Industrial Revolution. It is not coming; it is here. The structures we built for scarcity are already breaking under the weight of abundance. The question is not whether your organization will change. It is whether you will design the change intentionally or have it imposed on you by competitors who did.
References
Kling, N. "Organizational Transformation in the Age of AI Intelligence Abundance: A Framework for Human-AI Integration." 2025.
Dell'Acqua, F., et al. "The Cybernetic Teammate: A Field Experiment on Generative AI Reshaping Teamwork and Expertise." Harvard Business School Working Paper No. 25-043, March 2025.
Dunbar, R.I.M. "Neocortex size as a constraint on group size in primates." Journal of Human Evolution, 22(6), 469–493, 1992.
Brooks, F. The Mythical Man-Month: Essays on Software Engineering. Addison-Wesley, 1975.
Lencioni, P. The Advantage: Why Organizational Health Trumps Everything Else in Business. Jossey-Bass, 2012.
Edmondson, A. The Fearless Organization. Wiley, 2018.