Yet one foundational problem remains unresolved: AI systems cannot be legal agents.
Under every existing legal system, an AI cannot act as a director, manager, or entrepreneur. It cannot assume rights or obligations. It cannot enter into contracts. And it cannot be held accountable in the way human decision-makers can.
This structural limitation becomes particularly problematic in the context of DAOs.
DAOs Without a Legal Face

Many well-known decentralized ecosystems — from Bitcoin's infrastructure to DeFi protocols such as Uniswap or MakerDAO — operate without a traditional corporate structure.
The early “The DAO” project itself lacked formal external representation, relying instead on a Swiss entity (DAO.LINK) and external contractors to interact with the legal world.
In such environments, identifying “who is responsible” is already complex. When AI systems are integrated into DAO governance, the difficulty multiplies.
There is often:
- no centralized entity,
- no identifiable leadership,
- no accountable board,
- and participants may be anonymous and globally dispersed.
If damage occurs, who can be sued?
This is not merely a theoretical question. It is a regulatory challenge.
The Abandoned Idea of AI Legal Personhood

At the European level, the possibility of granting legal personality to AI systems was briefly considered as a way to create a center of responsibility.
That path was ultimately abandoned.
Legal scholarship and EU institutions identified multiple obstacles. Criminal liability, for example, presupposes a subject capable of punishment in a normative sense, and artificial agents cannot meaningfully fulfill the rehabilitative or preventive functions of sanctions.
The European Commission's High-Level Expert Group on AI clarified that there is no need to grant legal personality to emerging digital technologies. Harm caused by autonomous systems can generally be traced back to natural persons or existing legal entities — and where that is not possible, new liability rules directed at humans are preferable to creating artificial personhood.
In the DAO context, however, this reasoning encounters a structural difficulty: there may be no clearly identifiable human decision-maker to whom responsibility can be easily attributed.
The Limits of the “Human-in-the-Loop” Model

EU AI policy is built around a human-centric paradigm. The “human-in-the-loop” framework seeks to ensure that AI remains under meaningful human oversight.
This model works well in centralized corporate settings. It becomes conceptually and practically problematic in DAOs.
The very purpose of DAOs is to minimize human intervention and distribute governance through automated and consensus-based mechanisms. Imposing continuous human oversight risks undermining their functional design. In decentralized systems, enhancing human control may conflict with the core logic of automation and autonomy.
The Responsibility Gap and the Black Box Problem

Even looking back to the design phase does not fully solve the issue.
Attributing liability to programmers or developers becomes increasingly difficult as AI systems grow more autonomous. The causal link between input and output becomes opaque — a phenomenon widely referred to as the “black box problem.”
This opacity creates what legal scholars call the responsibility gap: a space in which harmful outcomes occur, but clear attribution becomes elusive.
The 2022 proposal for an Artificial Intelligence Liability Directive (AILD) attempts to address civil liability challenges by introducing a rebuttable presumption of causation. However, the injured party must still prove fault and establish a link between the damage and the AI system's operation.
In practice, this remains a significant evidentiary burden.
Moreover:
- The content of “duties of care” must be interpreted by national courts.
- The definition of fault is not harmonized.
- Divergent national interpretations may emerge.
Paradoxically, the attempt at harmonization risks generating fragmentation.
Transparency and Explainability

Opacity is the core issue.
As AI autonomy increases, the distance between human input and algorithmic output widens. Deep neural networks, in particular, are characterized by nested logic structures that make linear traceability difficult.
ESMA's 2023 report highlights AI as a factor negatively impacting transparency in financial markets. One proposed solution is enhancing “explainability.”
Explainability can be understood in two senses:
- Narrow: technical traceability of the variables influencing output.
- Broad: comprehensibility of AI decisions for human users.
Yet even this concept lacks consensus. What counts as sufficiently explainable remains uncertain.
In decentralized ecosystems governed partly by AI, opacity becomes not just a technical issue, but a governance and regulatory one.
The Broader Regulatory Question

The integration of AI into DAOs exposes the limits of existing corporate governance and liability frameworks.
EU regulation has rejected AI personhood. Human-in-the-loop models do not easily translate to decentralized governance. Liability directives still rely on fault-based structures designed for identifiable actors.
The deeper issue is structural: our legal systems are built around accountable subjects. AI-driven decentralized infrastructures challenge that premise.
The next phase of regulatory evolution will likely revolve around one central tension:
How can responsibility be meaningfully allocated in systems designed to minimize centralized human control?
That question will define the future interaction between AI governance, decentralized finance, and EU liability law.
Further reading and in-depth analysis

A more detailed discussion of protocol DAOs and regulation as infrastructure is available in the extended paper published on SSRN. Additional reflections on protocol governance and EU financial regulation are shared on LinkedIn.