AI Futures: Peace and War

Geostrategic security is in crisis at a time when urgent action on humanity’s intertwined environmental, technological and infrastructure systems is critical to long-term sustainability. The natural world, energy, food, water, raw materials, health systems and digital information networks are in volatile transition. Scientific discovery and invention bring revolutionary progress, yet often generate unanticipated systemic risks.

Artificial intelligence (AI) and other technologies amplify the challenges of the next decade and beyond, threatening to undermine prospects for peace. Yet specialist applications promise solutions: not just addressing the root causes of conflict, but transforming governance itself.

AI systems will act as force multipliers in conflict and war, yet they will also form part of a set of methods and tools that revolutionise early warning and, in turn, create the system conditions and pathways to peace.

We believe that novel ways of thinking about possible futures — together with a new generation of knowledge engineering and AI-based simulation methodologies — have the potential to bridge gaps in understanding that are both systemic and strategic, spanning governance and diplomacy.

Indeed, AI-based predictive models and early warning of emerging core-infrastructure crises have the potential to transform the way we govern at national and international levels.

The more ambitious target is to frame governance and policy as system-level interventions set in long-term, scenario-based strategic frameworks. In our view, policy navigational models should focus on early anticipation and prevention of events, ultimately in real time — beyond the conventional, static scenario models and linear forecasts that have constrained governance for decades.

Only by simulating emerging events, possible shocks, surprises and “unknown unknowns” can leadership teams develop shared understanding and recognise the risks of future inter-systemic crises long before they begin to develop momentum.
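
As a minimal illustration of what such simulation can look like in practice, the sketch below propagates random shocks through a small network of interdependent systems and estimates how often a local failure cascades system-wide. The system names, dependency map and probabilities are our illustrative assumptions, not outputs of any real early-warning model:

```python
import random

# Hypothetical dependency map: each system lists the systems it can destabilise.
# Names and probabilities are illustrative assumptions, not empirical estimates.
DEPENDENCIES = {
    "energy": ["water", "digital"],
    "water": ["food", "health"],
    "digital": ["finance", "health"],
    "food": [],
    "finance": [],
    "health": [],
}
SPREAD_PROBABILITY = 0.4  # chance a failed system takes a dependent down with it

def simulate_cascade(initial_failure: str) -> set[str]:
    """Propagate one shock through the network and return every failed system."""
    failed, frontier = {initial_failure}, [initial_failure]
    while frontier:
        system = frontier.pop()
        for dependent in DEPENDENCIES[system]:
            if dependent not in failed and random.random() < SPREAD_PROBABILITY:
                failed.add(dependent)
                frontier.append(dependent)
    return failed

# Run many simulations to estimate how often a single shock becomes systemic.
runs = 10_000
systemic = sum(len(simulate_cascade("energy")) >= 4 for _ in range(runs))
print(f"Energy shock became systemic in {systemic / runs:.1%} of runs")
```

Even a toy model like this makes the point of the paragraph above concrete: the value lies less in any single prediction than in giving a leadership team a shared, explicit picture of how failures might compound.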

The grand challenge is to reinvent governance to match the demands of a potentially chaotic world dominated by the cascading impacts of poorly understood, fast-moving events that threaten fragmentation and breakdowns in trust. Nowhere is the challenge as acute or urgent as in the deeply interconnected worlds of AI, peace and war.

There are many theories of how peace turns to conflict and war and how war ends and brings peace. Historians often talk about failures of understanding and failures of imagination. In his book The Earth Transformed (Bloomsbury, 2023), Peter Frankopan makes the case that “Much of human history has been about the failure to understand or adapt to changing circumstances in the physical and natural world around us.”

The expression “failure of imagination” dominates the narrative in reviews of catastrophic events over the last few decades, surrounding everything from 9/11 to Hurricane Katrina and the 2007-8 financial crisis.

In The Sleepwalkers: How Europe Went to War in 1914 (Penguin, 2013), Christopher Clark examines leadership behaviour in the lead-up to World War One and concludes that the protagonists were “sleepwalking”:

“They were not sleepwalking unconsciously, but all constantly scheming and calculating, plotting virtual futures and measuring them against each other… I was struck by the narrowness of their vision.”

The paradox is that, as the latest models in neuroscience tell us, humanity’s defining talent at the individual level is a consciousness characterised by the constant simulation of possible futures.

In this context, imagined futures are cultural realities: mental simulations that shape decisions in the here and now. They are also contested, and so become a defining feature of uncertainty, driving everything from rivalry over “industries of the future” and the risks of misjudgement and misunderstanding in crises to competing visions of the future of AI.

These perspectives have growing relevance. Never before have world leaders faced multiple highly interconnected, fast-moving and accelerating threats. As interconnections in a system increase, so do complexity and uncertainty. Creating a shared social sense of reality — critical to political authority, trust and social stability — becomes increasingly difficult. Growing uncertainty feeds public feelings of disorder and threatens traditional leadership norms and institutional authority.

With the acceleration of AI, digital technologies, robotics and open media, the number of interconnections between people, places and things will continue to grow exponentially.
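
One way to see why interconnection compounds so quickly: in a fully connected network, the number of potential pairwise links grows as n(n-1)/2, so every doubling of connected actors roughly quadruples the possible interactions, before digital systems multiply them further. A minimal illustration, with figures chosen only to show the shape of the growth:

```python
def pairwise_links(n: int) -> int:
    """Potential pairwise connections among n nodes in a fully connected network."""
    return n * (n - 1) // 2

# Doubling the number of connected actors roughly quadruples the possible links.
for n in (10, 100, 1_000, 10_000):
    print(f"{n:>6} nodes -> {pairwise_links(n):>12,} potential links")
```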

This brings us to some of the critical system-level variables and uncertainties, each with multiple outcomes that will shape the landscape of AI, peace and war over the next decade. To take a simple example, action on climate change may turn out to be “too little, too late”. Alternatively, exponential, system-level innovation in green technologies may create drastic cuts in emissions and herald the emergence of a secure, sustainable world.

Looking ahead, geostrategic disorder and chaos may become the new norm, with pervasive conflicts in cyberspace and over supply chains and natural resources. Integrated AI, quantum, neurocomputing and cyberwar technologies may become the defining source of tension and driver of conflict between major powers.

The pivotal uncertainty is the extent to which complexity, uncertainty and speed will overwhelm global leaders and governance institutions at national and international levels. In the absence of a widely shared sense of vision and purpose, a world of unintended consequences may perpetuate tensions between political, public, humanitarian and security interests and well-financed technology companies. Relationships between state and non-state actors may remain locked in conflicts over power and money.

As the digitisation of core infrastructure systems gathers pace, the unresolved threats to security and social stability may grow. Core infrastructures may become part of the volatile and widening digital “theatre of conflict” that extends from media and communications networks to command and control, all enveloped in a web of semi-autonomous AI systems that threaten to create their own realities, transcend boundaries and elude security and governance systems.

This is important because distinctions between machine intelligence and human intelligence are easily blurred. Machines can deliver human-like output but cannot be explained in human terms, even by their designers. We are vulnerable to treating AIs as if they were human, with human ethics, emotions, motives and intentions.

Another critical uncertainty is the extent to which AI will amplify the risks of misinterpretation, miscalculation and misjudgement, particularly in the world of military decision-making, peacekeeping and intelligence. At multiple levels, current AIs are opaque. Accuracy, novelty and “edge cases” are problematic. Transparency, explainability, provenance, authenticity and trust in mass-market AI products and services are elusive. They may remain so.

Alternatively, or in parallel, specialist high-trust AI systems and data networks may dominate “mission-critical” areas such as finance, insurance, infrastructure, aviation, shipping, health and security communications. Internationally agreed “guard rails” may embody rules about AI, cross-border cyberwar and disinformation. Much depends on the intentions of major technology companies that may, over time, focus on sustainable development.

Might we see the development of distributed digital worlds, regulated within global norms but controlled at state level, with “sovereign” data and AI systems managed within boundaries?

There are thousands of possible scenarios that may emerge over the next decade. In the scenarios we explore in the full web version of this essay, we draw on our library of simulation models that map multiple “high impact” variables, each with potentially extreme outcomes that will develop and interact in novel and often surprising ways over time. We illustrate just three: Dark Ages, Walled Gardens and Renaissance. These are deliberately extreme possible outcomes designed to inspire dialogue about what sort of world we want.
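
A simplified sketch of how such a scenario library can be generated: take a handful of critical variables, give each a set of extreme alternative outcomes, and enumerate the combinations. The variables and outcomes below are assumptions made for the sketch; the full models referenced above are far richer:

```python
from itertools import product

# Illustrative critical variables, each with extreme alternative outcomes.
# These names are assumptions for the sketch, not the authors' actual model.
VARIABLES = {
    "climate_action": ["too little, too late", "exponential green innovation"],
    "ai_governance": ["fragmented", "internationally agreed guard rails"],
    "great_power_relations": ["pervasive conflict", "managed competition"],
    "infrastructure_security": ["widening theatre of conflict", "high-trust systems"],
}

# Every combination of outcomes is one raw scenario; even four binary
# variables already yield 2**4 = 16 distinct futures to stress-test.
scenarios = [dict(zip(VARIABLES, combo)) for combo in product(*VARIABLES.values())]
print(f"{len(scenarios)} scenarios from {len(VARIABLES)} variables")
print(scenarios[0])
```

With dozens of variables, the combinations quickly run into the thousands, which is why named archetypes such as Dark Ages, Walled Gardens and Renaissance are needed as navigational landmarks.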

Any viable organisation must be able to cope with the dynamic complexity and uncertainty of its future environment. The same applies to states, cities, corporations and humanitarian agencies most concerned with creating the system conditions for peace.

The starting point is a shared understanding of possible futures. We define resilience as adaptive policies that work in even the most extreme possible scenarios. In practice, “adaptive” does not mean rapid responses to events but collective action on simulations and foresight in anticipation of crises. The alternatives lead inevitably to “too little, too late”.
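
Under that definition, resilience can be tested mechanically: score each candidate policy in every extreme scenario, and prefer the policy whose worst-case score is highest. A minimal sketch, with policies and scores invented purely for illustration:

```python
# Hypothetical scores (higher is better) for each policy in each extreme
# scenario. All names and numbers are invented for illustration.
POLICY_SCORES = {
    "business_as_usual": {"Dark Ages": 1, "Walled Gardens": 5, "Renaissance": 9},
    "adaptive_foresight": {"Dark Ages": 6, "Walled Gardens": 6, "Renaissance": 7},
}

def resilience(scores: dict[str, int]) -> int:
    """A policy is only as resilient as its worst-case scenario outcome."""
    return min(scores.values())

best = max(POLICY_SCORES, key=lambda p: resilience(POLICY_SCORES[p]))
print(f"Most resilient policy: {best}")  # adaptive_foresight: worst case 6 vs 1
```

The design choice here mirrors the argument in the text: a policy optimised for the most likely future can still fail catastrophically in an extreme one, so adaptive policies are judged by their worst case, not their average.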

As the scenarios illustrate, humanity has many choices between multiple alternative futures.

Renaissance describes one of many possible pathways to a more sustainable world. In this positive vision, AI systems, deployed with human expertise, imagination, inventiveness and a shared sense of purpose, have the potential to transform governance, policy-making and diplomacy at national and international levels and, in turn, prospects for peace.