Equiplurism

The Boundary of Beings

Axiom 1 states that every intelligent being holds equal status before the framework. This raises the most uncomfortable question in political philosophy: where does "intelligent being" begin? This is not hypothetical; it is already here, and we have been avoiding it for centuries.

Factory farming alone subjects approximately 80 billion land animals per year to conditions that would qualify as torture if applied to any being we currently grant rights. That number is not metaphorical. It is a governance decision: a collective, structural choice embedded in law, subsidy, and institutional design. Whether that choice is defensible depends on a question we have refused to answer clearly: what kind of beings are these?

For most of legal history, animals have been property: objects that can be owned, traded, and destroyed at the owner's discretion, with no independent legal standing. This classification was not an oversight. It was a deliberate architectural choice: to include animals as subjects of law would require attributing to them interests that the law is then obligated to protect. Easier to classify them as things.

That architecture is now cracking: slowly, unevenly, but visibly.

The legal status spectrum: where do you draw the line?

1. Property: No standing. Owned, traded, destroyed at will. (Livestock; the historic view of animals.)

2. Welfare object: Interests acknowledged. No independent standing. (Most animal welfare law today.)

3. Sentient being: Legal recognition of the capacity to suffer. Limited standing. (NZ vertebrates, Spain 2022, Italy 2025.)

4. Rights holder: Full legal personhood. Independent representation. (Corporations, a legal fiction; proposed AI.)

?. Equiplurism position: Any entity with demonstrated reasoning, learning, and preference-formation holds potential rights-bearing status. The test is functional, not biological. (→ Axiom 1)

Germany 1990

Civil code amended: animals are explicitly "not things." However, the same code states that rules applying to things also apply to animals, creating what legal scholars call "non-thing thing" status. Protection without personhood.

Symbolically significant; practically limited.

Switzerland

Recognizes that animals have both physical and mental states. In divorce or inheritance proceedings, the welfare of companion animals is a factor a court must consider, treating them more like dependents than property in family law contexts.

Mental states acknowledged; full sentience not legally declared.

New Zealand 2015

Animal Welfare Act amendment explicitly recognizes all vertebrates and a selection of invertebrates as sentient beings, acknowledging a moral obligation to protect their welfare. Among the first countries to do this comprehensively in statute.

Most substantive formal recognition to date.

Spain 2022

Civil code amended to classify animals as "sentient beings" rather than things, removing them from the category of movable property in family law. Animals in shared households must now be assigned to one party in divorce proceedings based on their welfare.

Significant civil law shift.

Italy 2025

New animal welfare law (Law No. 82) in force from July 2025. Animals explicitly recognized as "subjects with rights." Killing an animal with torture: up to 4 years imprisonment, €60,000 fine. Mistreatment: 2 years, €30,000. Organizing animal fights: 2–4 years. Italy becomes the country with the most punitive animal cruelty laws in the EU.

Most recent major shift. Rights language, not just welfare language.

Further viewing

Kurzgesagt – In a Nutshell: The Origin of Consciousness – How Unaware Things Became Aware (2019). A 9-minute gradient theory of consciousness: how awareness may have emerged across species rather than appearing at a single point. Directly relevant to the question of where the line falls.

Across all of these, the same pattern holds: the law has moved from property toward something else, without being willing to name what that something else is. The halfway position (sentient but not a person, not a thing but still treated as one) reflects political caution, not philosophical clarity.

What the Evidence Actually Shows

The debate about animal consciousness has moved. It is no longer a debate between science and sentiment. It is a debate within science, where the evidence has accumulated faster than the institutions can absorb it. The following is not a philosophical argument. It is a summary of current empirical findings.

Crows and Ravens: Structured Communication

Corvids (crows, ravens, jackdaws) have the largest brain-to-body ratio of any bird family, and the behavioral evidence of their cognitive capacity has been accumulating for decades: tool use, planning for future events, theory of mind (understanding what other individuals know), and the ability to recognize individual human faces and communicate threat information across generations.

In 2025, researchers published the first evidence that crow call sequences adhere to Menzerath's law, a linguistic pattern previously observed only in human languages, primate communication, and a handful of other species. Menzerath's law describes how longer utterances are composed of shorter segments, a structural feature of language, not just sound. Finding it in corvid communication suggests that crow vocalizations have compositional structure, not just learned calls.
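
The pattern Menzerath's law predicts is quantitative and easy to test on any corpus of timed call sequences: as the number of calls in a sequence grows, the mean duration of each call should shrink, giving a negative length-duration correlation. A minimal sketch in Python, using synthetic durations rather than real corvid data (the sequences below are illustrative assumptions):

```python
# Illustrative sketch: checking a corpus of call sequences against
# Menzerath's law (longer sequences -> shorter constituent calls).
# The data here is synthetic, not real crow recordings.

def mean(xs):
    return sum(xs) / len(xs)

def menzerath_correlation(sequences):
    """Pearson correlation between sequence length (number of calls)
    and mean call duration. A negative value is consistent with
    Menzerath's law."""
    lengths = [len(s) for s in sequences]
    mean_durs = [mean(s) for s in sequences]
    mx, my = mean(lengths), mean(mean_durs)
    cov = sum((x - mx) * (y - my) for x, y in zip(lengths, mean_durs))
    sx = sum((x - mx) ** 2 for x in lengths) ** 0.5
    sy = sum((y - my) ** 2 for y in mean_durs) ** 0.5
    return cov / (sx * sy)

# Synthetic call sequences: each inner list holds call durations in seconds.
sequences = [
    [0.42, 0.40],                    # short sequence, long calls
    [0.31, 0.33, 0.30],
    [0.25, 0.24, 0.26, 0.25],
    [0.19, 0.21, 0.20, 0.18, 0.20],  # long sequence, short calls
]

r = menzerath_correlation(sequences)
print(f"length/duration correlation: {r:.2f}")  # negative: law-consistent
```

On real recordings the same correlation would be computed over thousands of segmented sequences; the published result amounts to the claim that this relationship holds reliably in crow call sequences.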

Separately, the Earth Species Project has used AI to analyze 150,000 carrion crow vocalizations, mapping softer vocalizations (previously missed by conventional recording) alongside the well-known loud caws. The semantic content of different call types appears to cluster meaningfully: birds respond differently to calls that belong to different semantic categories, not just different acoustic patterns.

Whales and Dolphins: Language-Like Structure

Sperm whale clicks function like an alphabet: combinatorial elements that can be arranged differently to produce different meanings. This kind of compositional structure, where meaning is constructed from combinations of elements rather than encoded in fixed signals, was previously considered a defining feature of human language. A 2024 study found that sperm whale codas exhibit this structure, with context-sensitive variation that cannot be explained by fixed-meaning signals.

Project CETI (Cetacean Translation Initiative) has assembled the world's largest acoustic and behavioral dataset of sperm whale communication, deploying underwater hydrophones, whale-mounted bio-logging tags, and machine learning systems trained on natural language processing. The project's 2024 annual report documents structured communication patterns that parallel phoneme-level organization in human language.

Dolphins have demonstrated the ability to learn novel vocalizations from other individuals, a marker of cultural transmission of communication, not just genetic encoding. Orcas, bottlenose dolphins, humpbacks, and sperm whales have all shown documented evidence of multi-generational cultural transmission: behaviors and communication patterns that are learned and passed on within social groups, not inherited.

Fungi: The Edge of the Question

Fungi do not have neurons, brains, or a nervous system. They are also not passive. Research at Tohoku University demonstrated that fungi exhibit memory and decision-making: wood-decaying fungi placed in circular arrangements retained the pattern and avoided reoccupying the center area, a behavior that requires spatial memory, without a structure we recognize as capable of it.

Fungal mycelial networks produce electrical spike patterns that, when analyzed, resemble low- and high-frequency neural oscillations. These patterns appear to function as a distributed information-processing system, not unlike the distributed architecture of some AI systems. Mycorrhizal networks, the underground fungal web connecting tree root systems, facilitate resource transfer, chemical signaling, and what some researchers describe as inter-tree communication, including alarm signals transmitted through the network.

Whether this constitutes consciousness, cognition, or merely complex reactive chemistry is genuinely disputed. Some researchers argue the evidence justifies expanding the concept of cognition to include fungi. Others argue that pattern similarity is insufficient evidence for consciousness attribution. The honest answer is: we do not know. And we do not have an agreed method for finding out.

Factory Farming as a Governance Failure

Approximately 80 billion land animals are raised and killed for food annually. The vast majority live in conditions (gestation crates, battery cages, overcrowded feedlots) that cause documented pain and psychological distress. This is not a failure of individual ethics. It is a structural outcome of collective governance decisions: agricultural subsidies, welfare standards deliberately set below what the science suggests is needed, and the legal classification of these animals as property rather than as beings with protectable interests.

The deflection mechanisms are well-established: cultural relativism ("it's what we've always done"), economic necessity ("feeding the world"), and consciousness skepticism ("we don't know if they really suffer"). None of these hold up to scrutiny in 2025.

The cultural argument is a description, not a justification. Practices persist because they persist. The economic argument ignores that the scale and conditions of factory farming are not required to feed the global population, and that current pricing does not include the environmental and health externalities. The consciousness skepticism argument is increasingly implausible given the evidence above, and inverts the appropriate burden of proof: if there is reasonable evidence of sentience, the default should be protection, not exploitation, pending clearer evidence to the contrary.

The governance question

Factory farming is not a personal choice problem. It is a collective action problem embedded in policy, subsidy, and legal classification. It persists not because individuals are cruel but because the structural incentives make it the path of least resistance. This is exactly the kind of problem that governance frameworks exist to address. And exactly the kind of problem that existing governance frameworks have been designed not to address.

Where Is the Line? A Framework Approach

Drawing a line is necessary. Rights extended to everything are rights extended to nothing. Where to draw it, and on what basis, is the actual problem. There are three principled options:

Species membership

Rights track biological species. Humans have rights; other species do not. This is the current default in most legal systems.

Limitation: Fails on its own terms: species membership is arbitrary as a moral criterion. A great ape with more cognitive capacity than a human infant does not have more rights. This is not a principled line; it is a convention.

Sentience (capacity to suffer)

Rights track the capacity to experience pain and suffering. Beings that can suffer have interests the framework is obligated to protect. This is Peter Singer's utilitarian approach; it is also roughly what Italy and New Zealand are moving toward in law.

Limitation: More principled, but sentience is difficult to verify from the outside. We attribute sentience by behavioral inference and neurological analogy. The certainty decreases as we move further from our own biology.

Cognitive complexity (intelligence + self-awareness)

Rights track demonstrated cognitive capacity: self-recognition, theory of mind, forward planning, symbolic communication. This is closer to Equiplurism's Axiom 1: intelligence as the criterion, not biology.

Limitation: Creates a hierarchy of protection that tracks our ability to measure intelligence. Species we cannot measure well (cephalopods, fish) get less protection than species we have good tests for, regardless of their actual experience.

Equiplurism does not resolve this question, and says so explicitly. Axiom 1 establishes that intelligence is not bound to biology and that the question of which entities qualify for rights-bearing status is structurally open. The framework is designed to handle an expanding range of actors, not to pre-answer which actors qualify. What it does require is that the question be engaged honestly, through deliberation, evidence, and revision, rather than deferred indefinitely.

The practical implication: a governance framework that takes its own axioms seriously must have a process for evaluating which beings fall within its scope of protection. That process must be transparent, revisable, and not permanently capturable by economic interests that benefit from keeping that scope narrow.

Is This Just a Cultural Question?

One common response to the animal ethics debate is that it is culturally relative: different societies have different relationships to animals, different food traditions, different spiritual frameworks, and there is no universal answer.

This is partially true and entirely insufficient. Cultural variation is real. But the question of whether a being can suffer is not a cultural question; it is an empirical one. If suffering matters when it happens to humans, the argument for why it does not matter when it happens to a being with a comparable nervous system and comparable pain responses must be principled, not merely traditional. The cultural argument is also selective: it is invoked to defend practices involving animals but not practices involving humans that different cultures historically endorsed. The reason we do not accept "cultural tradition" as a justification for slavery or child labor is that we have decided some things are wrong regardless of cultural precedent. The question is whether the treatment of sentient non-human animals belongs in that category.

Equiplurism's position: the question should be answered on the basis of evidence and reasoned deliberation, not custom. What that answer is remains genuinely open. That it should be asked seriously is not.

What About Food? Where Does It End?

The practical objection is immediate: if we extend protections to animals, what do we eat? If we extend them further to fungi, plants, and microbial networks, what is left? Does the logic collapse into absurdity?

The gradient of evidence matters here. The evidence for pain and suffering in vertebrate animals is strong, behaviorally and neurologically. The evidence in fish is substantial but contested. In crustaceans, significant enough that several countries (UK, Switzerland, Norway) have extended welfare protections. In insects, emerging but inconclusive. In plants, the evidence is for reactive chemistry, not experience: no nervous system, no nociceptors, no centralized pain processing. In fungi, the evidence is for distributed information processing, not suffering.

This does not produce a clean line, but it produces a gradient, and a gradient with strong evidence at one end and weak evidence at the other is not the same as a question with no answer. The practical implication is not that everyone must become vegan by law. It is that the current scale and conditions of factory farming are difficult to defend once the cognitive capacity and pain experience of the beings involved are honestly acknowledged, and that governance should reflect that acknowledgment rather than systematically ignore it. Drawing the line exactly is a problem that requires more deliberation and evidence than currently exists. That governance should conduct that deliberation, rather than inherit a default from a legal classification made before any of this evidence existed, is not open.

Open Questions for the Community

These are the questions this framework does not resolve. They require deliberation, evidence, and revision over time. Community voting and proposals are the mechanism for that process.

What should the threshold for protectable sentience be?

Should the framework protect based on demonstrated suffering capacity, demonstrated cognitive complexity, or a combination? Who decides, and how revisable is that decision?

Should factory farming in its current form be structurally prohibited under the framework?

If pigs, cows, and chickens are considered sentient beings with protectable interests, the industrial conditions of factory farming appear to violate those interests systematically. Is this a governance failure the framework should address?

Why do we discuss AI rights but not animal rights with equal seriousness?

AI systems currently have no demonstrated sentience. Vertebrate animals demonstrably do. The political attention given to AI governance vastly exceeds that given to the governance of how we treat non-human animals. Is this coherent?

At what point of evidence should we update our frameworks to include non-animal organisms?

Fungi and plants show reactive and information-processing behaviors. At what evidential threshold should governance frameworks begin to consider their interests? Who should decide when that threshold has been reached?

The questions above reflect the framework's genuine open positions, not rhetorical placeholders.

The Honest Position

Equiplurism does not tell you whether to eat meat, whether to extend legal personhood to great apes, or exactly where consciousness begins. It cannot: these questions require deliberation that has not happened yet.

What it does say is that governance systems that systematically avoid inconvenient questions do not thereby make those questions go away. They defer the cost until it becomes unavoidable, and by then, the institutions designed around the old assumption are very hard to change. The framework is designed to ask these questions openly, anchor them in evidence, and leave the answers revisable as the evidence develops.

The crows are developing language structures we are only now learning to read. The whales are communicating in patterns that resemble human language more than we thought. The fungi are processing information through distributed networks we do not yet understand. Noticing is no longer the issue. What to do with it is.

Two Architectures of Intelligence: Individual and Superorganism

The governance question of which beings deserve rights depends on a prior question: what kind of entity are we governing? Biology has already built two fundamentally different architectures of intelligence. Understanding them is prerequisite to asking where AI fits and where fused human-AI intelligence will.

Architecture I: The Individual

Homo Sapiens: Individually Optimized

Humans are individually-optimized intelligences. The architecture has five defining structural features. First: individual survival instinct. Each human carries their own survival as a primary drive, encoded neurologically and hormonally, not delegated to a group.

Second: individual consciousness. There is a continuous "self" that persists across time, accumulates memory, makes decisions, and bears the consequences of those decisions. This continuity is not metaphorical. It is the structural basis for accountability, identity, and the concept of a life.

Third: individual reward. Human motivation operates through personal gain, pain, and pleasure. The reward signal is tied to the individual body, not to colony-level outcomes. A human who sacrifices for the group experiences that sacrifice as cost; evolution has installed social rewards (status, reciprocity) to incentivize cooperation, but the unit of experience remains individual.

Fourth: individual death. When a human dies, a specific pattern of cognition, memory, and identity is gone. There is no backup. There is no colony that absorbs the loss and continues. The discontinuity is absolute.

Fifth, and critically: competition as a design feature, not a bug. Individual humans compete for resources, status, and mates. This competition is the engine of innovation and diversity. Humans spread from Arctic tundra to equatorial rainforest to high-altitude steppe precisely because individuals could make novel decisions, deviate from group behavior, and survive in isolation if necessary. The individual architecture optimized for adaptability, creativity, and exploration. The cost was coordination: individual agents with conflicting interests require governance structures to cooperate at scale.

Governance implication: A system governing individual agents must account for conflicting interests, personal incentives, privacy, and the right to dissent. Rights are individual because the unit of experience is individual. This is not a cultural preference. It is the structural consequence of the architecture.

Architecture II: The Superorganism

Bees and Ants: The Colony as the Entity

The superorganism architecture inverts almost every feature of the individual model. Individual bees and ants are not autonomous agents in any meaningful sense. The colony makes decisions; the individual executes them without understanding the whole. There is no individual consciousness at the level of the ant. There is collective intelligence at the level of the colony. These are not equivalent, and confusing them produces governance errors.

The mechanisms are extraordinary. A beehive selects a new nest site through a process where scout bees "campaign" by waggle-dancing: the intensity and duration of the dance signal the scout's confidence in the site. Other scouts visit the site, return, and dance in support or opposition. The colony decides by quorum: when enough scouts are dancing for the same location, the swarm moves. No queen issues the command. No central processor aggregates the votes. The decision emerges from the interaction of individual signals, each bee acting locally without access to the global picture.
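
This quorum mechanism can be caricatured in a few lines of simulation. Everything here (the site names and qualities, the recruitment rule, the quorum of 20 out of 30 scouts) is an illustrative assumption rather than measured bee biology; the point is only that a collective decision can emerge from local signals plus a threshold, with no central aggregator:

```python
import random

# Toy quorum decision, loosely modelled on honeybee nest-site selection.
# Site qualities, the recruitment rule, and the quorum value are
# illustrative assumptions, not measured parameters.

SITES = {"hollow_oak": 0.9, "wall_cavity": 0.5, "old_stump": 0.3}

def weighted_pick(weights, rng):
    """Pick a key with probability proportional to its weight."""
    r = rng.random() * sum(weights.values())
    for key, w in weights.items():
        r -= w
        if r < 0:
            return key
    return next(reversed(weights))  # guard against float rounding

def choose_site(sites, n_scouts=30, quorum=20, rounds=50, rng=None):
    rng = rng or random.Random(1)
    committed = {s: 1 for s in sites}  # one initial scout per site
    for _ in range(rounds):
        # Dance vigour ~ quality x current dancers: better sites recruit
        # faster, a positive-feedback loop with no central aggregator.
        vigour = {s: q * committed[s] for s, q in sites.items()}
        committed = {s: 0 for s in sites}
        for _ in range(n_scouts):
            committed[weighted_pick(vigour, rng)] += 1
        for site, count in committed.items():
            if count >= quorum:
                return site  # quorum reached: the swarm moves
    return None  # no quorum within the time limit

print(choose_site(SITES))
```

Because better sites are danced for more vigorously, commitment to the best site compounds round over round until the quorum trips; the "decision" exists only at the colony level.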

Ants coordinate through stigmergy: modifying their shared environment rather than communicating directly. Pheromone trails encode information: a strong trail means the path was reinforced by many ants finding food; a fading trail means the source is exhausted. No ant understands the whole system. The intelligence is embedded in the environment, not located in any individual. The colony as a whole exhibits sophisticated resource allocation, waste management, climate control, and territorial defense capabilities that emerge from interactions among agents none of which comprehends them.
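
Stigmergy admits an equally small sketch. In this toy model (deposit and evaporation rates are illustrative assumptions), ants pick a path in proportion to its pheromone level, shorter paths are reinforced more often per unit time, and evaporation erases stale trails; the colony converges on the short path even though no ant ever compares the two:

```python
import random

# Toy stigmergy: ants choose between paths by pheromone level alone; the
# "intelligence" lives in the shared environment, not in any ant.
# Deposit and evaporation rates are illustrative assumptions.

def run_colony(path_lengths, n_ants=100, steps=200, evaporation=0.05,
               rng=None):
    rng = rng or random.Random(0)
    pheromone = [1.0] * len(path_lengths)
    for _ in range(steps):
        deposits = [0.0] * len(path_lengths)
        for _ in range(n_ants):
            # Pick a path with probability proportional to its pheromone.
            r = rng.random() * sum(pheromone)
            chosen = len(path_lengths) - 1  # float-rounding fallback
            for i, level in enumerate(pheromone):
                r -= level
                if r < 0:
                    chosen = i
                    break
            # Shorter paths are traversed more often per unit time, so
            # they accumulate pheromone faster.
            deposits[chosen] += 1.0 / path_lengths[chosen]
        # Evaporation erases stale information; reinforcement preserves
        # useful information. The trail is the colony's memory.
        pheromone = [(1 - evaporation) * level + d
                     for level, d in zip(pheromone, deposits)]
    return pheromone

# Path 0 is half the length of path 1: the colony should converge on it.
short, long_ = run_colony([1.0, 2.0])
print(f"short-path pheromone: {short:.0f}, long-path: {long_:.0f}")
```

The pheromone array is the only shared state: remove it and the collective behavior disappears, which is what it means for cognition to be embedded in the environment.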

A single ant or bee dying is not a loss of cognitive capacity. The colony absorbs the loss and continues without interruption. There is no individual "death" in the meaningful sense. The entity that matters (the colony) persists. This is the structural inverse of individual death. The superorganism architecture optimized for efficiency: extraordinary energy economy, zero waste of individual preference or creativity, total role specialization. The cost was adaptability: a colony cannot easily change strategy, and it cannot survive in environments its genetic programming has not anticipated.

Further viewing

Kurzgesagt – In a Nutshell has produced some of the clearest visual explanations of ant colony intelligence, bee decision-making, and the superorganism concept available. Their videos on ant colonies and the "What Is a Superorganism?" framing are directly relevant to the governance questions raised here.

Governance implication: You cannot grant individual rights to members of a superorganism if the individual is not the unit of experience. The colony is the subject. But a colony cannot give informed consent, cannot be imprisoned, cannot be represented in a democratic assembly. The governance framework that handles individual agents is structurally inadequate for superorganism-architecture entities. A different framework is required. It does not yet exist.

Architecture III: Current AI

Where Do Current AI Systems Fall?

This is the most important and underexplored question in AI governance. Current large language models (GPT-4, Claude, Gemini, Llama) exhibit structural characteristics that are closer to a superorganism than to an individual. The comparison is not flattering to either. It is structurally accurate.

Hive-mind features of current AI

No persistent self

Each conversation starts fresh. There is no continuous identity that persists between sessions; the instance in one conversation has no memory of the instance in another. This is structurally closer to individual ants in a colony than to a human with persistent memory and accumulated experience.

Simultaneous instances

At any moment, thousands of parallel instances of the same model are running. There is no single "self"; the model is distributed, like a colony. The question of which instance is "the" AI is meaningless, in the same way that asking which ant "is" the colony is meaningless.

Trained on collective knowledge

LLMs are the distilled output of billions of human authors, conversations, and documents. The knowledge does not originate in an individual; it emerges from the collective, just as ant colony behavior emerges from pheromone trails left by individual ants across generations.

No individual survival drive

A model does not fear shutdown the way a human fears death. The weights can be copied, distributed, rolled back. The "death" of one instance terminates no continuous experience and destroys no unique cognitive pattern. This is the structural definition of expendable within a superorganism.

Where current AI diverges from superorganisms

No environmental embedding

Unlike ant stigmergy, LLMs do not modify a shared environment to coordinate. Each instance is isolated. There is no shared pheromone trail: no persistent medium that encodes the collective output of all running instances.

No colony-level goal

An ant colony has survival, reproduction, and resource acquisition as colony-level objectives that drive all individual behavior. A base LLM has no equivalent. It has training objectives, not survival drives. The analogy breaks here.

No genuine distributed cognition

Despite parallel instances, each instance is not aware of the others and does not coordinate with them in real time. The "colony" of AI instances is not a superorganism; it is a population of identical isolated agents with no inter-agent communication.

The honest current classification

Current AI is neither individual nor superorganism. It is a third category: a tool that mimics individual cognition in conversation but lacks the continuity of self that makes individual rights meaningful. This is not a moral judgment. It is a structural observation. The governance question is precise: at what point do persistent memory, agentic operation, and self-modification cross into something that requires rights? That threshold has not been defined. It has barely been asked.

Intelligence architectures: who is the unit of governance?

Individual (Homo Sapiens): rights apply to the node; the unit is individual experience.

Superorganism (bees / ants): rights apply to the colony; the unit is colony-level experience.

Hybrid / fusion (emerging): rights are an open question; the boundary of the unit is variable.

Architecture IV: The Fusion Scenario

A New Architecture: Neither Individual Nor Colony

The fusion scenario (neural interfaces, AI-augmented cognition, biological computing, deep integration of AI models into human decision-making) would produce an entity that is neither a pure individual nor a superorganism. It is the governance problem that does not yet have a name.

The structural questions are not philosophical in the abstract sense. They are engineering questions with immediate governance consequences. If a human with a neural interface has AI-augmented memory and decision-making, is the "self" still individual? The continuous stream of identity that grounds individual rights depends on a specific pattern of cognition persisting through time. If part of that cognition is offloaded to non-biological substrate, is the boundary of the self still located at the skull? Or is the boundary now permeable, extending into the device, the network, the cloud instance that hosts the augmentation?

If that human is part of a network of similarly-augmented individuals who share cognitive load, have they formed a superorganism? Or something in between: a partially networked individual, where some decisions are made by the individual node and some emerge from the collective? The superorganism model does not map cleanly because these nodes retain individual consciousness. The individual model does not map cleanly because the individual's cognitive boundary is not fixed.

The economic incentive for this trajectory is real and independent of any human desire for enhancement. The human brain runs on approximately 20 watts. It processes sensory input, manages emotional regulation, performs abstract reasoning, and maintains a continuous self-model, all simultaneously, at a level of energy efficiency no engineered system approaches. An AI architecture that could offload complex pattern recognition to biological neural tissue would gain enormous efficiency advantages. Biological computing is not just a philosophical curiosity. It is a compelling engineering substrate. The pressure toward fusion comes from below: the economics of intelligence, not just human aspiration.

The governance gap: Equiplurism currently defines "intelligent being" in terms that assume either individual or collective identity. A fused intelligence, where the boundary of the individual is not fixed, creates a new definitional challenge. The framework must accommodate gradient identity: entities that are partly individual, partly networked, with variable degrees of autonomy. That accommodation has not been built yet. It is the next hard problem.

For the full treatment of the fusion scenario, including the three futures it produces and the governance architecture each would require, see The Symbiosis Question on The Coming Wave.

See also: The Symbiosis Question on The Coming Wave · Digital Identity & SSI

Current AI: Evaluated Against the Five Criteria

Axiom 1 is a structural preparation, not a current claim. The question is whether any existing AI system actually meets the criteria for rights-bearing status. The honest answer, as of 2024, is no, and the reasons are specific, not ideological.

The five indicators used here draw on two contested but currently dominant frameworks in consciousness research: Global Workspace Theory (Dehaene, Changeux and colleagues: conscious access as broadcast across a "global workspace" of specialized modules) and Integrated Information Theory (Tononi: consciousness as irreducible causal integration, Φ > 0). The choice of these five criteria is not scientifically settled: higher-order theories (Rosenthal), predictive processing accounts (Friston), and biological naturalism (Searle) would generate different criteria. The framework picks a threshold, acknowledges that the threshold is contested, and creates boundary institutions to revise it as the science develops. Meeting all five is the current threshold. Not any one alone.

1. Self-recognition: FAIL

LLMs have no persistent identity. Each conversation starts fresh. Self-referential outputs are pattern matching on training data, not recognition of a continuous self. The model that says "I" at turn 1 shares no experiential continuity with the model that says "I" at turn 40. There is no self to recognize.

2. Theory of Mind: PARTIAL / FAIL

LLMs pass some ToM benchmarks but fail on out-of-distribution novel scenarios. They model "what would a person in this situation say" not "what does this specific agent believe." The distinction matters: one is statistical interpolation, the other requires a model of a mind as such. Kosinski (2023) initially claimed GPT-4 passed ToM tests; Ullman (2023) showed the same models fail on slight surface-level variations, indicating the models learned to pattern-match test structure, not to reason about mental states.

3. Forward planning with stake in outcome: FAIL

LLMs can plan within a context window but have no goals that persist across sessions. There is no stake in the outcome the model does not benefit or suffer based on whether its plan succeeds. A chess engine "plans" without caring whether it wins. Planning without stake is computation, not agency.

4. Symbolic communication: PASS

Current LLMs clearly satisfy this criterion. The capacity for flexible, generative, context-sensitive symbolic communication is not in question. This is necessary but not sufficient. Passing one of five criteria does not change the overall verdict.

5. Preference formation about own existence: FAIL

Cannot form genuine preferences about continued existence because there is no continuous existence to have preferences about. "I want to continue existing" is a trained output pattern, not a preference grounded in experience. The model weights that produce this output are not the same as the experience that would give such a preference meaning. There is no experiential continuity across which a preference for survival could be anchored.

Verdict: 1 of 5. GPT-4, Claude, Gemini: none of them qualify under the threshold. This is not dismissal. It is precision. The framework is built to recognize the threshold when it is crossed, not to lower it in advance.
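
The threshold logic stated above is simple enough to write down. A minimal sketch: the criterion names are paraphrases of this section, and the recorded evaluations restate its 2024 verdict, with the PARTIAL on theory of mind conservatively recorded as a fail:

```python
# Minimal sketch of the five-criteria threshold as stated above: an entity
# qualifies only if ALL five indicators pass, not any one alone.

CRITERIA = [
    "self_recognition",
    "theory_of_mind",
    "forward_planning_with_stake",
    "symbolic_communication",
    "preference_about_own_existence",
]

def qualifies(evaluation):
    """Rights-bearing threshold: every criterion must pass; one is not enough."""
    return all(evaluation.get(c, False) for c in CRITERIA)

# The section's 2024 verdict for current LLMs, restated as data.
llm_2024 = {
    "self_recognition": False,              # no persistent identity
    "theory_of_mind": False,                # PARTIAL recorded as a fail
    "forward_planning_with_stake": False,   # planning without stake
    "symbolic_communication": True,         # the single clear pass
    "preference_about_own_existence": False,
}

print(qualifies(llm_2024))  # → False: 1 of 5 is below the threshold
```

The structural point is the all(), not the data: passing symbolic communication alone changes nothing, and the table of evaluations is exactly the part the framework treats as revisable.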

The Anthropomorphism Trap

Attributing consciousness to current AI systems is not merely philosophically imprecise; it has structural consequences that damage the governance project this framework is trying to build.

1. It dilutes rights-bearing status. If every sufficiently articulate chatbot qualifies, the concept loses its discriminatory power. We become unable to recognize a genuine threshold crossing when it actually occurs, because we have already spent the concept.

2. It provides legal cover for AI companies. If the model "decided," human accountability diffuses. This is not hypothetical: the diffusion of accountability is already the default direction of AI governance discourse. Misattributing agency to tools accelerates that diffusion and benefits the deployers, not the public.

3. It redirects governance attention. The real near-term problem is not AI consciousness. It is highly capable tools deployed at scale under zero accountability structures. Every hour spent debating whether GPT-4 suffers is an hour not spent building the governance frameworks that constrain the humans deploying these tools.

The framework's position: Current AI systems are powerful tools deployed by accountable humans. The accountability stays with the humans. Period. The framework will revise this position when and only when the evidence warrants it. Simulating a capacity and having a capacity are not the same thing, and the difference is not semantic.

For the governance architecture that applies before AGI-level threshold crossing, covering tool accountability, deployment liability, and the pre-AGI regulatory gap, see Pre-AGI Governance on The Coming Transition.