Anticipating the Geopolitical Impact of Advanced AI

Artificial intelligence (AI) systems are being deployed on a massive scale throughout the world. While the technology is still developing, it is clear that it will have consequences of comparable scale for a wide variety of sectors, and diplomacy and international relations are no exception. AI systems can act as force multipliers for both war and peace, in the service of individuals, states, non-state actors (such as major technological players) and the international system.

In an unpredictable world, forecasting the risks and rewards of these technologies is a demanding exercise. With the rise of AI integrated into complex systems, and more recently the emergence of generative AI systems, the challenge of anticipation has become even more complex. There are two options for analysis: try to imagine the evolution of trends that are already apparent; or try to “think the unthinkable”, attempting to foresee developments for which there is as yet no evidence. Over the 10-year horizon, the first approach is the more reasonable — without altogether excluding the second.

Some trends are clear. The development and democratisation of AI systems and their evolution towards increasingly general and autonomous forms, combined with the massive use of social networks and the consequent networking of billions of people around the world, can only be a major cause for concern. In the realm of the "unthinkable", we could imagine that, in a not-so-distant future, AI might become fully autonomous or even "conscious" (see topic 2.4: Future of Consciousness). In that event, we might, like numerous AI specialists, wonder about the potential existential threat these technologies pose to humanity.

But let us stick for the moment to the foreseeable. AI systems could be influential through their direct application to conflict, or through their unintended or undesirable effects as they converge and intersect with other technologies and infrastructures. Conversely, they could play a role in maintaining peace and international equilibrium. Analysing massive quantities of data on theatres of tension or conflict could be critical to developing ways of containing or resolving them. Intelligent surveillance systems built on such analysis could provide early warning of conflict scenarios before they become evident (see topics 4.1: Science-based Diplomacy, and 4.5: Behavioural Science of Groups).

One of the major impacts of AI will undeniably be on communication. In political science, perceptions powerfully shape how the world is understood and are therefore a key determinant of peaceful or belligerent behaviour. The actual capabilities of AI systems will matter, but so will the narratives deployed to justify the ways in which they are used. AI systems will challenge the status quo in two fundamental ways. First, they have a multiplier effect, enabling content to be produced and disseminated at greatly increased speed and scale. Second, they can be used to generate fake or misleading content that is all but impossible to distinguish from real information.

As AI is exploited by ever more experienced actors with diverse intentions, the combined effect will be to foster competing claims to "reality", in which attackers become victims, self-defence becomes aggression and critical events can be presented in any number of ways to different audiences, massively relayed by social networks and the mass media. Thousands of AI-generated images and videos will flood digital space, with no filters to assess their veracity or provenance. The media, overwhelmed by data perceived as information and constrained by an ever-shrinking news cycle, will relay slanted narratives without having been able to verify their source or relevance (see topic 4.5: Behavioural Science of Groups).

As long as these systems are not fully autonomous, this capacity for influence will be exploited, sometimes by benign actors wishing to support peace and international collaboration, and sometimes by hostile ones seeking to inflame hatred and generate conflict. The manipulation of opinion and the control of individuals, long practised throughout history, will be amplified and used to destabilise states or geopolitical regions and to discredit individuals or public and private organisations. These narratives, whether peaceful or bellicose, will affect perceptions. Normative debates will become biased, and biased perceptions will lead to distorted analyses and inappropriate positions.

That will, in turn, influence behaviour: false information or synthetic data shapes perceptions and makes diplomatic work and political communication ever more complicated. Geopolitical players will have to come to terms with these competing narratives, with a significant impact on international stability. Diplomacy, largely driven by communication, will become distributed on a large scale among an ever-increasing number of players, whose influence will be broadened and amplified by information technologies and artificial intelligence.

As ideologised and arbitrary positions become established in law and policy, biased perspectives and new conflicts may arise. Today's digital world treats Western views of what is acceptable as pre-eminent, giving them a weight disproportionate to their real-world importance. The issue of bias (involving, say, discrimination on the basis of gender or skin colour) is presented as a universal problem to be regulated, whereas it is not universally perceived as such. This itself introduces a bias into the treatment of the subject, one that could find its way into legal norms that conflict with the values or cultural positions of many global actors. The imposition of such standards through diplomacy, largely led by the West, would therefore be arbitrary, resting on a geographically and culturally circumscribed ideology. The more complex this discourse becomes, the more the diplomatic management of international relations turns into a high-risk exercise.

Generations who grow up in a world full of AI-generated and AI-mediated content will develop new relationships with technology and information. Some will elevate AI systems to the status of technical demiurges, with transhumanist movements reinforcing their weight by encouraging the augmentation of humans through these technologies. Civilians and soldiers alike will be increasingly connected, via exogenous equipment such as augmented-reality goggles or exoskeletons and via devices implanted directly in or on the body (see topics 1.5: Augmented Reality, and 2.1: Cognitive Enhancement).

Ultimately, to return to "thinking the unthinkable", humans may end up in direct connection with what Vladimir Vernadsky and Pierre Teilhard de Chardin termed a "noosphere": a harmonised collective of consciousnesses, both natural and artificial, akin to a super-consciousness. This would give access to an effectively infinite mass of data that individuals will be unable to process efficiently: humans will depend on automated actors that exploit this data on their behalf and render it intelligible. The narratives these actors convey and promote will be revered as truths, regardless of any factual considerations. The human being will no longer be in, on or out of the loop: they will be no more than an element in a network, a cog in a complex system over which their control will have been reduced to a minimum.

Impacts on warfare

Artificial intelligence will profoundly transform the experience of war, affecting both military personnel and civilians and raising major ethical and operational challenges. One obvious progression from today's increasing reliance on remote warfare, using drones and heavily computerised weaponry, is to delegate decisions on targeting and the use of force to quasi-autonomous weapons systems controlled by AI. This will optimise and accelerate decision-making but raise new ethical and legal questions, not least about the nature of human responsibility on the battlefield. Just as commercial and custom-made drones have become weapons of war, so we can expect AI-controlled weapons systems to become widely used outside conventional military procurement, training and deployment. The resulting "civilianisation" of war will raise substantial ethical concerns, inducing a warlike bias within the population itself.

The military, for their part, will have to adapt quickly to these new technologies, even if the integration of AI systems leads to a reduction in manpower and a potential loss of skills within the armed forces. Augmenting human soldiers through neuroscience and AI may provide new capabilities that compensate for this de-skilling, but it will also raise concerns about privacy, individual autonomy and, more broadly, the capacity of human agency to shape its own future and environment. The race for technological supremacy in military AI will intensify between states and non-state actors, each seeking economic and geopolitical advantage, including access to the scarce resources that enable the development of these systems.

The emergence of these technologies will thus reshape power dynamics, defence strategies and military tactics. The effect will be amplified by stories of AI systems' successes circulating on social networks, while raising questions of sovereignty and technological dependence. A battle for normative influence will be waged in cyberspace, with divergent narratives emerging on how to respond to cyberattacks. The qualitative and quantitative explosion of AIs will challenge the established order, weaken international organisations and their credibility, and favour the rise of non-state actors with diverse agendas.

Indeed, the generation and wide dissemination of content that does not reflect reality will help to discredit international organisations already struggling to establish their legitimacy. Fake pictures of massacres or atrocities will raise doubts in people's minds about the ability of major international organisations such as the UN or NATO to manage conflicts. We have already seen campaigns to discredit the United Nations, in particular its "blue helmets" as well as national contingents, seriously undermining the credibility of peacekeeping forces. The use of AIs to fabricate entirely artificial situations and disseminate them on a massive scale will weaken international organisations and foster instability at the international level.

International law will be ill-suited to the realities of cyberspace and AI, deeply challenging the principles of humanitarian law. Technological developments in AI and cyberspace require a degree of normative flexibility that international law cannot offer. Moreover, if competing realities emerge, the law will not be able to deal with them all. The applicability of existing laws in the metaverse, for example, remains problematic, because the reality of that space differs from traditional reality. Rules designed for the tangible world will not be suited to worlds built around competing realities, and the law will struggle to adapt to worlds that differ greatly from one another, in which a multitude of players with various agendas will try to impose their own rules.

Faced with these complex issues, a new and robust ethical and legal framework will be essential to guide the responsible development and use of these technologies while preserving fundamental human values. Multilateral efforts involving all relevant stakeholders will be needed to establish international standards. But the regulation of AI systems, whether civilian or military, will come up against numerous obstacles, not least the risk of arbitrariness or cultural bias in the new normative frameworks. A "standards war" between major powers will make consensus impossible, with a resulting threat to international stability. Influence campaigns will proliferate, themselves amplified by AI systems and their democratisation, shaping the perceptions of the public and the international community regarding the social acceptability of military AI.

Impacts on peace

AI can also be used to promote peace on a global scale. Visionary individuals, operating as moral entrepreneurs, will be able to leverage their renown and influence and capitalise on the democratisation of access to AIs. Using the multiplier power of social platforms and cutting-edge technological tools, these actors will be able to convey a discourse of concord and considerably amplify their advocacy of peace. Their efforts could directly target state and international institutions, with the aim of influencing policies and changing mentalities. This approach, combining the strategic use of AI with the mobilisation of social networks, has the potential to redefine peacebuilding methods, enabling individual voices or committed groups to have a significant impact on political decisions and international peace dynamics. It will also directly affect people's perceptions and individual behaviour.

Soft power will thus incorporate AI technology as an essential instrument, enabling states to strengthen their strategic autonomy. However, this technological breakthrough will amplify asymmetries between powers, particularly in their ability to develop adequate infrastructure, access the necessary resources and upgrade the skills of their populations. The technological gap will widen, particularly in the military field, where AI-equipped weapons systems will offer a significant advantage to their owners, who will be able to put them to use in the service of international peacekeeping and security. But this asymmetry will threaten the sovereignty of low-income countries, with implications for the management of peace processes and humanitarian action.

Nevertheless, AI systems will offer opportunities for peacebuilding, through their use by state and non-state actors to disseminate peaceful messages, moderate belligerent content and develop non-lethal means of conflict resolution. By processing massive amounts of data, they will also be able to provide decision-makers with accurate analyses that take into account all aspects of a conflict, thus fostering the implementation of processes aimed at establishing or restoring positive peace. AI can revolutionise early-warning systems, improve the effectiveness of peacekeeping operations and facilitate ceasefire monitoring. AI systems can likewise provide forward-looking scenarios for anticipating and resolving potential conflict situations (see topics 4.1: Science-based Diplomacy, and 4.3: Prediction, Foresight and Futures Literacy).

On top of this, AIs can help define inclusive norms conducive to peace, develop effective communication strategies and identify solutions to information asymmetry, enabling large-scale digital dialogues (see topic 1.6: Collective Intelligence). They will also provide valuable information on the views of populations in conflict zones. As we are already seeing with drone warfare, remoteness from the consequences of conflict reduces resistance to war, because the price to be paid is lower. Within this framework of positive peacemaking, AIs will facilitate dialogue between conflicting parties and improve decision-making by analysing sentiment and narratives.

At the international level, major multilateral and inter-state bodies will continue to take significant steps to frame and promote the responsible and beneficial use of AI. As part of a proactive approach to harnessing the potential of AI for the UN Sustainable Development Goals, its application will make it possible to prevent conflicts, supervise and monitor peace processes, reduce hunger through better management of food supplies and ease tensions around strategic resources by optimising their exploitation.

Whether it is a question of soft or hard power, AI systems are force multipliers with undeniable potential. Whether from the point of view of individuals, states and non-state actors, or the international system, the impact of AI systems on peace and war will be multi-faceted and difficult to control. In the absence of real, definitive control, power will belong to whoever has mastered the narrative and can exploit it to their advantage.