Artificial Intelligence and the Law

AI is transforming science, markets and human capability — yet its most profound impact may lie in how it compels us to reimagine the principles and institutions by which we govern it. The future of AI is not only a technical question. It is fundamentally a normative and institutional one.

Anticipation is not only about racing to control AI before it becomes uncontrollable. It is about shaping the development of AI in ways that evolve alongside, and remain grounded in, our values, legal principles and social institutions. Rather than fixating on hypothetical end-points, I propose a pragmatic and generative frame: how can we build systems of AI under the rule of law that make our societies better, our decisions wiser and our institutions more capable of learning? And at a moment when the rule of law itself is under strain in many parts of the world, we should also ask how AI might be used — carefully and deliberately — to strengthen the very legal foundations on which open and just societies depend.

In that spirit, I outline three interlinked opportunities for anticipatory work at the intersection of AI and law:

  • the development of a next generation of guard-rails that support better decision-making;
  • the creation of learning institutions equipped for experimental and shared governance;
  • the reinvigoration of human flourishing as a guiding principle for science and technology.

Next-Generation Guard-Rails: Toward Better Decision-Making

The past decade has revealed not a failure of law itself but the consequences of misapplying outdated assumptions about how the law operates in complex, fast-changing environments. Too many regulatory efforts have been shaped by a narrow, static view of law as merely a tool for ex ante control or post hoc sanctions, resulting in rules that are rigid, fragmented and poorly attuned to the generative quality of digital systems. These approaches often remain reactive and superficial, addressing isolated risks while neglecting the deeper socio-technical dynamics that shape the development and use of technology. What they lack is not authority but design: the flexibility, embeddedness and iterative capacity needed to guide AI in ways that are context-sensitive and future-oriented.

What is needed instead is a new generation of guard-rails: not fixed constraints, but dynamic governance instruments that ensure the trajectory of AI remains in alignment with fundamental societal values. These guard-rails are not external controls imposed after the fact; they are part of the socio-technical infrastructure in which AI systems are conceived, designed and deployed, and must be built to support better decision-making, both human and automated.

This calls for governance mechanisms that are anticipatory, adaptive and responsive to feedback. Rather than aiming for exhaustive ex ante control, such guard-rails preserve agency and contestability within evolving systems. They combine legal certainty with procedural flexibility, enabling governance systems to learn from real-world effects while remaining anchored in legitimate boundaries.

Guard-rails, in this view, are neither technocratic fixes nor political stopgaps. They are enabling structures: legal and institutional arrangements that facilitate experimentation while preventing irreversible harm. They offer an affirmative vision of governance — not as a brake on innovation but as the foundation for rights-compatible, trustworthy progress. Developing such guard-rails is a frontier for anticipatory research, where legal theory, institutional design and technical architecture must come together in sustained dialogue.

From a legal perspective, a core challenge lies in reconciling contested and sometimes incompatible principles. For example, the many legal and philosophical notions of fairness — ranging from equal treatment to equity of outcomes — cannot all be operationalised simultaneously within AI systems. This raises hard questions about prioritisation, trade-offs and the role of law in mediating these tensions. Similarly, foundational principles of data protection, such as purpose limitation and data minimisation, may be in tension with the fundamentals of generative AI, which often relies on broad, repurposed datasets and large-scale model training. This creates a need for research into how such principles can be reinterpreted, adapted or safeguarded in ways that respect their normative core while accommodating the technical realities of AI development.
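To make the first of these tensions concrete, the sketch below illustrates the general point (the base rates, the 40% selection rule and the idealised ranking model are assumptions for illustration, not taken from any specific system): when two groups differ in their underlying rates of a positive outcome, a rule that equalises selection rates across groups will generally produce unequal true-positive rates, and vice versa.

    # Illustrative sketch of the conflict between demographic parity (equal
    # selection rates) and equality of opportunity (equal true-positive rates)
    # when base rates differ across groups.

    # Hypothetical base rates: the share of each group that truly qualifies.
    base_rates = {"A": 0.6, "B": 0.3}
    selection_rate = 0.4  # demographic parity: select 40% of every group

    def true_positive_rate(selection: float, base: float) -> float:
        """TPR of an idealised selector that ranks perfectly, admitting all
        qualified candidates before any unqualified ones."""
        return min(selection / base, 1.0)

    for group, base in base_rates.items():
        tpr = true_positive_rate(selection_rate, base)
        print(f"group {group}: selection rate {selection_rate:.2f}, TPR {tpr:.2f}")

    # Output:
    #   group A: selection rate 0.40, TPR 0.67
    #   group B: selection rate 0.40, TPR 1.00

Equal treatment at the point of selection thus yields unequal error rates, and equalising error rates would require unequal selection rates. Which notion should prevail is exactly the kind of prioritisation question that law, not engineering, must mediate.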

We are already seeing movement in this direction through adaptive governance mechanisms: differentiated obligations for high-impact platforms, participatory risk-assessment models that integrate stakeholder perspectives, and iterative codes of practice that bridge normative principles with evolving technical standards.

Learning Institutions: Toward Experimental and Shared Governance

Just as AI systems improve through iterative refinement, so too must our institutions. Anticipatory governance in the age of AI requires institutions that can experiment, learn and evolve over time without sacrificing democratic legitimacy or the rule of law. This is a challenging proposition but not an impossible one. Indeed, history offers numerous examples of institutions that have grown into their mandates: from environmental agencies that expanded to address climate change to constitutional courts that adapted to digital rights.

Today, the urgency is to build institutions that are explicitly primed for learning. This involves more than simply scaling up technical expertise. It means rethinking institutional design to allow for provisional rule-making, open-ended deliberation and robust mechanisms of feedback and revision. It also means ensuring that the outcomes of experimentation are captured, interpreted and shared. Learning must be social and cumulative, with institutions adapting internally while enabling others across jurisdictions and disciplines to benefit from the lessons they generate.

This turns anticipatory governance into a distributed endeavour. Learning institutions become sites of both local experimentation and global exchange. They provide a bridge between practice and policy, translating technical developments into legal categories and civic expectations. They make it possible not only to govern what is known, but to co-create responses to what is new and uncertain. Emerging institutional formats such as anticipatory sandboxes and foresight labs illustrate how governance can be reimagined as a shared learning process, where experimentation is structured, monitored and scaled across jurisdictions.

Such institutions are essential not only for managing risk but also for stewarding shared understanding. They are the connective tissue between science, technology and society — capable of interpreting the social meaning of AI and fostering governance cultures that are reflexive, collaborative and open-ended.

Flourishing Humans: Toward Better Societies

The final opportunity — perhaps the most profound — is to ground the future of AI not in efficiency or performance alone, but in human flourishing. This requires a broader conception of intelligence, one that does not reduce value to accuracy of prediction, or progress to computational speed, but recognises the importance of trust, empathy, creativity, participation and well-being. The design-for-values movement has sought to embed societal and ethical principles directly into technology design, while responsible research and innovation frameworks extend this thinking into research governance, emphasising anticipation, reflexivity, inclusion and responsiveness. Both approaches have shaped how values enter technology development, but they also share persistent challenges: translating abstract principles into actionable design choices, reconciling competing values and ensuring legitimacy over time.

Recent thinking shifts toward adaptive, participatory and context-sensitive methods that treat values not as fixed inputs but as elements to be revisited and negotiated throughout the lifecycle of AI systems. This evolution aligns closely with the aim of embedding human flourishing as a guiding principle, making values not just a starting point but a continuous thread in design, deployment and governance.

In legal and democratic traditions, human flourishing is not merely a philosophical ideal. It is a normative bedrock. It is embedded in the principles of dignity, autonomy and equality that underpin the rule of law, and it offers a normative anchor for approaches such as design-for-values and responsible innovation. As AI systems increasingly mediate decision-making in education, health, employment and the public sphere, the challenge is not only to integrate these principles into technical design but also to protect and advance them through institutional and legal safeguards. This makes the pursuit of human flourishing both a design challenge and a governance lodestar.

Anticipating the future of law and legal systems in an age of advanced AI requires looking beyond technology alignment to institutional transformation. As AI reshapes decision-making across domains, it will also test the capacity of legal systems to uphold dignity, autonomy, equality and other foundational principles. This involves rethinking how legal doctrines are interpreted in data-driven contexts, how procedural safeguards like due process and contestability are preserved when decisions are automated, and how institutions can remain adaptive without losing legitimacy. Digital-rights frameworks, participatory governance processes and co-regulatory instruments such as living codes of practice offer early signs of how law can co-evolve with AI, embedding normative commitments into both system design and legal oversight. The challenge is to ensure that, as AI evolves, legal systems themselves become more capable of learning, coordinating across jurisdictions and stewarding human flourishing as a core purpose of governance.

To seize all three of these opportunities, we must integrate law, legal scholarship and governance deeply into the anticipatory science and technology agenda. This means treating legal thinking not merely as a compliance checkpoint but as a source of principles, procedures and institutional capacities that can guide AI’s trajectory from the outset.

Science anticipation in this sense must be interdisciplinary in substance, not just in form, drawing on the procedural insights of law, the adaptive capabilities of learning institutions and the design methods that embed values into technology. Priorities for research and practice include the development of guard-rails that reconcile contested principles such as fairness and privacy, the evaluation of adaptive legal frameworks that can evolve alongside AI, and the design of institutional models capable of cross-jurisdictional learning and coordination.

By grounding science anticipation in the rule of law, we gain not only mechanisms for managing risk but a shared orientation: tools and traditions to guide us in asking not only what is possible, but what is just.

Further reading:

  • Gasser, U., & Palfrey, J. (2025). Advanced Introduction to Law and Digital Technologies. Edward Elgar Publishing. (Forthcoming)
  • Gasser, U., & Mayer-Schönberger, V. (2024). Guardrails: Guiding Human Decisions in the Age of AI. Princeton University Press.
  • Gasser, U. (2025). Navigating AI governance as a normative field: Norms, patterns, and dynamics. In K. H. Jamieson, W. Kearney, & A.-M. Mazza (Eds.), Realizing the promise and minimizing the perils of AI for science and the scientific community (Chapter 5). University of Pennsylvania Press.
  • Gasser, U. (2024). Governing AI With Intelligence. Issues in Science and Technology, 40(4), 36–40.
  • Mayer-Schönberger, V., & Gasser, U. (2024, September 16). A Realist Perspective on AI Regulation. Foreign Policy.
  • Gasser, U., & Mayer-Schönberger, V. (2025). On the Shoulders of Giants: The Importance of Regulatory Learning in the Age of AI. Virginia Journal of Law & Technology, 28(1), 1–24.