The Problem
Every major crisis of our time shares one root cause: governance structures that were not built for the world we live in now. Democracy, capitalism, and socialism were each designed to solve specific problems in specific eras. None of them anticipated artificial intelligence, mass automation, or the speed at which power can concentrate in the 21st century.
This is not a failure of values. The values behind democratic governance (equality, accountability, representation) are still correct. What has failed is the architecture. The mechanisms through which those values are implemented were designed for a slower world, a world of industrial labor, national borders, and purely human actors.
“We live in capitalism. Its power seems inescapable. So did the divine right of kings.” (Ursula K. Le Guin)
Before AI: The Globalization Gap
The stress on governance systems did not start with artificial intelligence. It started decades earlier, when globalization began moving supply chains, capital, and information across borders faster than national institutions were designed to track. A tax structure built for factory workers in one country cannot govern a company that books profit in Ireland, runs servers in Virginia, employs contractors in Manila, and sells to customers in Brazil. The OECD BEPS framework, a multinational attempt to fix corporate tax arbitrage, produced fifteen action points in two years of negotiations and has been stalled in implementation disputes for a decade since. That lag is not political incompetence. It is the structural consequence of building governance for a world that no longer exists. The social contract (work here, pay taxes here, receive services here) assumed that production, consumption, and residence happened in the same place. That assumption collapsed in roughly thirty years. The institutions built on it have not caught up.
The political reactions (nationalism, democratic backsliding, anti-immigration movements) are not irrational. They are what happens when people correctly sense that decisions affecting their lives are being made outside any framework they can influence, and they reach for the nearest lever still available: the national ballot box. The structural trap is that national ballot boxes cannot solve international problems.
Artificial intelligence does not create a new problem here. It accelerates an existing one by an order of magnitude and removes the remaining time for gradual adaptation. Every structural failure that globalization exposed in national governance (jurisdictional mismatch, regulatory lag, accountability gaps) AI exposes in international governance, at machine speed. The question is not how to govern AI specifically. It is why governance architecture capable of operating at the speed and scale of the problems we already face does not yet exist.
The Speed Mismatch: Problem Velocity vs. Governance Velocity
The Human Value Gap: Governance That Sorts People by Passport
The most direct expression of governance failure is not abstract. It is visible in which lives receive institutional responses. A Ukrainian refugee in 2022 received emergency EU resettlement rights within days. An Afghan refugee attempting the same displacement received, on average, a multi-year bureaucratic process with no guaranteed outcome. A Syrian refugee attempting entry to the EU in the same period was systematically blocked or detained. The three crises were simultaneous. The governance responses were structurally different. The variable was geography of origin.
Governance frameworks that assert equal human dignity while structurally sorting access to safety, services, and legal status by national origin have not resolved that tension. They have encoded it. The value of a human life, measured in institutional response time and legal access, is determined by the passport its holder carries. That is a governance design choice, not an inevitability.
Six Crises With One Root
Democracy Under Pressure
Electoral cycles run on years. Disinformation, financial contagion, and AI deployment run on days. The accountability gap between those two speeds is not incidental; it is the failure mode.
Countries containing 70% of the world's population experienced democratic backsliding over the past decade.
Power Concentrates
Power concentrates wherever accountability is absent: in corporate monopolies, state bureaucracies, and platform algorithms alike. Capital moves faster than law. State surveillance moves faster still. The shared structural failure: no constitutional limit that applies equally to all of them.
The top 1% holds more wealth than the bottom 50% combined.
Disinformation at Scale
Opinions are now industrially manufactured. The threat is not individual lies; it is coordinated uncertainty that paralyzes deliberation and makes collective decision-making impossible.
False news spreads 6× faster than accurate reporting on social media.
The Social Contract Breaks
Work was never just an economic mechanism; it was the primary source of identity, social belonging, and meaning for most people. Every welfare system, pension scheme, and social insurance program was built on the assumption that most adults would work, most of the time. Automation removes that assumption.
Up to 85 million jobs displaced by automation between 2020 and 2025 (WEF estimate).
Cultural Collision Without Structure
As migration increases and digital communication erases distance, different legal traditions, value systems, and governance philosophies are being asked to coexist without any shared framework for how to do so. The result is friction without resolution mechanisms, and communities without representation.
281 million international migrants globally in 2020, more than triple the 1970 figure.
UN International Migration Report, 2020
AI Without Governance
Artificial intelligence is the first technology that can make decisions, allocate resources, and influence human behavior at scale without any human directly in the loop. Who governs AI governs everything AI touches. As of 2025, the honest answer is: nobody with democratic legitimacy.
0 binding international AI governance treaties as of 2025.
Crisis Severity Index (V-Dem / WEF / OECD composite)

The governance speed gap: institutions built for decades face threats that unfold in hours. The mismatch is structural, not incidental.
A Deeper Look
Why Democracy Is Failing Now
Democratic institutions were designed to aggregate preferences through deliberate, slow processes: public debate, representative assemblies, checks and balances. That slowness was a feature, not a bug. It prevented rash decisions and forced compromise. But it assumes that the information environment is roughly honest, that deliberation is possible, and that the speed of threats is human-paced.
None of those assumptions hold today. Disinformation moves faster than fact-checking. Polarization is industrially manufactured. Existential threats can materialize in hours: a pandemic, an AI arms race, a market crash triggered by algorithms. Institutions built for months are being asked to respond in minutes.
The Broken Social Contract
The social contract of the industrial era was roughly this: you work, you earn dignity, you survive. Work was not just an economic mechanism. It was the primary source of identity, social belonging, and meaning for most people. Welfare systems, pension systems, social insurance: all were built on the assumption that most adults would work, most of the time.
Automation is not just replacing jobs. It is dissolving the foundation that every welfare system was built on. No existing system has a structural answer to what happens when a majority of productive tasks can be performed more cheaply by machines. This is not a distant future problem. It is a problem that is already generating political instability in every industrialized country.
The Automation Floor: When Does Production Stop Making Sense?
The ILO Global Wage Report 2022/23 documented something worth pausing on: real wages fell in developed economies for the first time in decades, while productivity rose. The surplus went somewhere. It went to capital. That is not a political claim. It is what the numbers show when you put them next to each other.
OECD modeling puts 14% of current jobs at "high risk" of full automation and another 32% at risk of significant task displacement by technologies that already exist. Not future ones. The standard response is: new jobs will emerge, as they always have. That response worked during industrial automation because machines replaced physical labor. Cognitive labor was safe. What GPT-4 class models did in 2023 is expose that assumption as historically contingent. GPT-4 scored in the 90th percentile on the Uniform Bar Exam. That is not physical automation. That is something different.
The harder question is not whether jobs disappear. It is the one economists mostly avoid: what happens when machines are producing for machines? FANUC's factory in Oshino, Japan, has been running lights-out since 2001. No humans on the floor. Machines build machines. US equity markets run predominantly on algorithmic volume: machine-to-machine trades, with no human in the loop and real GDP implications. Follow this to its endpoint and you get a system that is technically maximally efficient and produces no human welfare at all. GDP could rise while every human's quality of life deteriorates. The metric and the reality would have fully decoupled.
This is not a market failure in the conventional sense. Markets respond to demand. If consumers cannot earn money because their labor is worthless to the production system, there is no consumer to respond to. The system becomes self-referential: machines optimize for outputs that feed other machines. Efficiency becomes the only metric because it is the only metric that survives when you remove the human.
What remains (and this is not a consolation argument) is the one thing machines cannot have by definition: purpose. Machines are means. They optimize toward goals they are given. They cannot set goals. They cannot decide that a particular outcome is worth pursuing or that a different kind of society is preferable. Creativity, in the philosophically serious sense, is not generating outputs. It is choosing what outputs are worth generating. That choice requires someone to have a stake in the answer.
Equiplurism's response is structural, not inspirational. If the governance framework does not institutionally encode non-efficiency metrics (human participation, creative contribution, epistemic diversity), then the default trajectory is a system that optimizes those away. Not maliciously. Just because efficiency was the only criterion someone thought to write down. How identity, culture, and belonging factor into what counts as a non-efficiency metric is addressed directly on the Identity & Citizenship page.
Open question
At what ratio of machine-to-human economic contribution does the system stop generating human welfare? Nobody has a principled answer. The absence of a governance mechanism to even ask the question is the problem. Not the automation itself.
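The shape of that open question can be made concrete with a toy numerical sketch. Every number below is an illustrative assumption, not an empirical estimate; the only point is to show how the headline metric and the human-captured share can move in opposite directions as the machine share of production grows:

```python
# Toy sketch: GDP vs. human-captured income as the machine share of
# production grows. All parameters are illustrative assumptions.

def economy(machine_share, total_output=100.0, productivity_gain=2.0):
    """Split a stylized economy's output between machines and humans.

    machine_share: fraction of productive tasks performed by machines (0..1).
    Machines are assumed productivity_gain times more productive per task,
    so shifting tasks to machines raises total output (GDP) even as the
    wage-earning human share of that output shrinks.
    """
    machine_output = total_output * machine_share * productivity_gain
    human_output = total_output * (1 - machine_share)
    gdp = machine_output + human_output
    human_income_share = human_output / gdp
    return gdp, human_income_share

for share in (0.0, 0.25, 0.5, 0.75, 0.95):
    gdp, human = economy(share)
    print(f"machine share {share:.0%}: GDP {gdp:6.1f}, "
          f"human income share {human:.1%}")
```

GDP rises monotonically while the human share collapses toward zero, which is the decoupling the text describes. The productivity multiplier is arbitrary; any value above 1 produces the same qualitative curve, and nothing in the model tells you at which point along it human welfare stops being generated. That is the governance question, not an economics question.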
The Metric Problem: What Does “Success” Even Mean?
Harari makes the argument cleanly in Sapiens: if you measure biological success by the number of individuals alive simultaneously, the chicken is one of the most successful animals in evolutionary history. Billions of individual animals, on every continent, genetically propagated far beyond any prior range. By pure count, the chicken wins.
Those chickens live in conditions that meet any reasonable definition of suffering. The success metric and the lived reality have fully decoupled.
We do the same thing with GDP.
GDP measures the sum of economic activity. Not welfare. Not meaning. Not health. It grows when you build hospitals and when you build prisons. It grows when people are productive and when they are consuming antidepressants. A natural disaster requiring $200 billion in reconstruction grows GDP. A culture in which people care for elderly relatives at home without payment shrinks it. Simon Kuznets, who created the national income accounts in 1934, warned Congress in the same report: “the welfare of a nation can scarcely be inferred from a measurement of national income.” We have been using it to measure the health of civilization ever since, by default, because nobody put a different metric into the governance machinery.
Bhutan noticed. The kingdom formally rejected GDP as its primary national metric in the 1970s and developed Gross National Happiness (GNH) in its place: nine domains including psychological wellbeing, cultural resilience, ecological diversity, and time use. Not perfect, and not easy to measure. But asking a structurally different question: not how much are we producing, but how are people actually doing?
The Buddhist tradition Bhutan draws from makes a structural argument here, not merely a spiritual one. The concept of dukkha (unsatisfactoriness: the perpetual gap between what is and what we grasp for) is treated not as an individual emotional state but as a predictable consequence of how we organize pursuit. The Middle Way is not moderation for its own sake. It is the observation that optimizing toward any single extreme produces suffering, reliably, regardless of the extreme. E.F. Schumacher's "Buddhist Economics" (1966) formalized this as a governance variable: right livelihood, work that contributes to human dignity without requiring exploitation, is something a governance system can either support or undermine by structural design.
Before we can diagnose which governance systems are failing, we need to agree on what “failing” means. Most governance architectures never ask this question. They inherit GDP, optimize for it, and call any deviation from growth a crisis. The automation floor problem is exactly this: a system that maximizes efficiency while producing no human welfare is not a failure by GDP. It is a success. The metric and the goal have decoupled. Equiplurism proposes to write the success metric into the governance architecture as a constitutional variable subject to deliberation and revision, not inherited by accident.
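What "writing the success metric into the governance architecture" could mean is easiest to see in schematic code. The sketch below is purely illustrative: all names, weights, and indicator choices are hypothetical and not part of any Equiplurism specification. The only point it makes is structural: the metric becomes a replaceable variable rather than an inherited constant.

```python
# Schematic sketch (hypothetical names and weights): the success metric
# as a first-class, revisable variable in a governance architecture.

from typing import Callable, Dict

# A metric maps observed indicators to a single assessment score.
Metric = Callable[[Dict[str, float]], float]

def gdp_metric(indicators: Dict[str, float]) -> float:
    # The inherited default: total output, regardless of what it is output of.
    return indicators["total_output"]

def plural_metric(indicators: Dict[str, float]) -> float:
    # An alternative that also weights participation and wellbeing,
    # so efficiency alone cannot dominate the score.
    return (0.4 * indicators["total_output"]
            + 0.3 * indicators["human_participation"]
            + 0.3 * indicators["reported_wellbeing"])

class Constitution:
    """Holds the success metric as an explicit, revisable variable."""

    def __init__(self, metric: Metric):
        self.metric = metric

    def revise_metric(self, new_metric: Metric) -> None:
        # In a real framework this step would require deliberation and
        # ratification; here it is a single call, to show where the choice lives.
        self.metric = new_metric

    def assess(self, indicators: Dict[str, float]) -> float:
        return self.metric(indicators)

snapshot = {"total_output": 100.0,
            "human_participation": 20.0,
            "reported_wellbeing": 30.0}

c = Constitution(gdp_metric)
print(c.assess(snapshot))   # scores output only
c.revise_metric(plural_metric)
print(c.assess(snapshot))   # same economy, different verdict
```

Under the default metric, a fully automated, zero-participation economy scores as well as any other with the same output; under the plural metric it cannot. The weights themselves are exactly the kind of constitutional variable the text argues must remain open to deliberation and revision.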
How Power Concentrates
Capital has always moved faster than law. What changed in the last two decades is the order of magnitude: a technology platform can achieve global dominance and reshape political reality before any regulator has even defined what problem to solve.
In 2024, five US technology companies held combined market capitalizations of approximately $12 trillion, exceeding the GDP of any individual European economy, including Germany. That is not a policy failure. It is the structural outcome of governance frameworks built only to regulate market competition, not to constrain private power concentration, which is a different problem entirely. Apple, Microsoft, Alphabet, Amazon, and Meta do not merely compete in markets. They set the terms of what markets exist, what infrastructure runs them, and which information about political choices reaches which voters.
The mechanism that makes this self-reinforcing is worth naming precisely. These companies shape the regulatory environment through lobbying spend that dwarfs any civic countervailing force: in 2023, Meta, Google, Amazon, and Apple collectively spent over $60 million on federal lobbying in the US alone, per OpenSecrets disclosure data. They shape the personnel of oversight agencies through revolving-door employment: the FTC chair who understands digital advertising intimately because they previously worked for a digital advertising company. They shape the information environment in which political communication about their own regulation occurs: a platform company deciding what reaches its users about platform regulation is not a neutral actor in that conversation. Citizens vote. But the decisions that determine what information they see before voting, what work is available, and what infrastructure their daily life depends on are made by entities no election touches and no ballot removes. This is not conspiracy. It is the predictable geometry of letting private power accumulate faster than democratic accountability can follow, and then watching as the democratic frameworks themselves get reshaped by the power they failed to contain in time.

The self-reinforcing cycle: each stage of capture funds and enables the next. No single intervention point breaks it; structural separation must operate at all five nodes simultaneously.
AI: The Governance Gap
Artificial intelligence is not just another technology sector to regulate. It is the first technology that can make governance decisions (allocating resources, filtering information, scoring risk, setting priorities) at a speed and scale no human institution can audit in real time. As of 2025, the honest answer to who governs AI is: nobody with democratic legitimacy. Frontier model development is concentrated in four or five private companies operating under minimal external oversight. Nation-states compete rather than cooperate because the first-mover advantage in AI capability translates directly into economic and military advantage. There is no international framework with both the technical authority and the response speed to govern what AI is already doing today, let alone what it will be doing in five years.
The EU AI Act (2024) is the most serious legislative attempt so far. It applies to the EU market. It does not apply to models training in California, autonomous weapons systems, market-moving algorithms processing trillions in daily trades, or social scoring infrastructure already operating in several countries. Jurisdictional governance cannot contain a technology with no jurisdiction.
The second-order problem is less discussed but more structurally consequential. AI systems are not merely a governance subject. They are already governance actors. In the United States, algorithmic risk scores determine parole recommendations across most states, affecting hundreds of thousands of people annually. Automated hiring filters screen applications before any human sees them. Content moderation systems make tens of millions of speech decisions per day. Credit scoring algorithms determine housing access. Immigration risk classification systems determine who crosses borders. Algorithmic insurance pricing determines who can afford to live where. Each of these is a governance decision: it allocates opportunity, restricts behavior, and distributes consequence, made by a system that operates at machine speed, with no deliberation window, no public record, and no mechanism for the affected person to contest the logic. A governance architecture designed for human-paced decision-making is structurally incapable of providing accountability for machine-paced governance. This is not a gap that better regulations close at the margin. It requires architectural-level change; the trajectory of what is coming is addressed directly in The Coming Wave.
One Structural Diagnosis
These six crises are not isolated failures. They are symptoms of the same underlying condition: governance structures built for the industrial era, slow and national and human-only, being asked to manage problems that are fast, transnational, and increasingly non-human. The values behind democratic governance are not wrong. The architecture that was supposed to implement them is.
The common root is a speed mismatch. Capital, information, and now decision-making itself have all outpaced the institutions designed to govern them. Each crisis above is a different expression of this structural gap, which is also why each crisis resists being solved in isolation. You cannot fix AI governance without fixing the democratic deficit. You cannot fix the democratic deficit without addressing how power concentrates. You cannot address power concentration without rethinking what counts as legitimate influence.
This is what architecture-level change looks like. Not a new policy inside a broken system. A different system, designed for the conditions that already exist.
These are the problems of today.
But the crises above are the ones we can already name and measure. Behind them is a second wave (space governance, AI ethics, corporate states, resource wars beyond Earth) that is not science fiction. It is the direct consequence of the trajectory we are currently on.