Science Lens: Future of research
This report anticipates key challenges related to future developments in international science and technology research. It then outlines potential opportunities for action and concludes with a detailed analysis of individual case studies for these opportunities.

A central opportunity for science diplomacy is to foresee and manage the potential risks and rewards of rapid technological developments over the coming 5, 10 and 25 years. An inability to foresee scientific and technological trends, and how they may impact international society, will result in missed opportunities to direct such trends towards socio-economic development as well as geopolitical security, social well-being and planetary health. The accurate anticipation of scientific and technological trends can also grant a competitive advantage in the economic and political sphere at a particularly sensitive time in the evolution of diplomatic relations. In this report, we outline emerging trends and challenges underpinning the process of research in science and technology that we estimate will have major impacts on research and technology, and thus on international society and international diplomacy, in the future.

The case studies highlighted in the final section of this report have been selected on the basis of four main criteria:

(1) they are areas of scientific research where emerging technologies are occasioning a dramatic shift (and in most cases an acceleration) in methods, outputs and prospects, with transformative implications for society at large;

(2) they are recognised as promising areas for innovation and discovery, with high levels of investment already directed towards these domains;

(3) they address key challenges for humanity and the planet, thereby constituting high-stakes investments; and

(4) there are high levels of uncertainty and risk around whether the promise of such technologies may be fulfilled, and whether this may prove beneficial to society and the environment.

This list is not meant to be comprehensive; other topics could have been included but were not due to limits in length and scope. We also had to restrict the depth of our analysis to some key points. The report thus aims to give a sense of the diversity of voices and areas where these topics come up, while acknowledging that there is much more to consider and do between and beyond the points we depict. Our considerations reflect our own positionality as minority world researchers working in a well-established academic organisation.

The findings of this report are grounded in mixed-methods research in the philosophy, history and social studies of science, as well as extensive academic engagement with science policy and research governance initiatives over the past decade. Our methods include: wide-ranging review of scientific literature and related policy reports; in-depth ethnographic study of specific research practices and environments, thereby acquiring a sense of opportunities and challenges at the cutting edge of innovation; surveys conducted to assess the perception of developments around openness, data management and AI among researchers; collaboration with an array of scientific groups and science policy organisations around the world; and engagement with scholarship in the history, philosophy and social studies of science on relevant trends, ongoing developments and future prospects of science and technology.

This section introduces five cross-cutting issues that underpin the key opportunities outlined below. The underpinning issues need systematic and systemic consideration, sensitive to local settings, to help address challenges at the international level and in a coordinated way.

[1] Fragile science

The health of the general research ecosystem impacts the quality of scientific research.

The pace at which innovation is currently pursued may be seen as part of a political decision-making and governance climate that generally pursues and prioritises “more” and “quick” innovation over “better” and socially equitable innovation. To prevent rushed policy, anticipatory approaches may aim to intervene in technological innovations as early as possible, potentially before they emerge, with critical collaboration among various public and private stakeholders about what social values and futures are pursued and who is expected to benefit. So policy-makers, diplomats, politicians and others may need to rely on an array of expertise to anticipate future technological innovations. An anticipatory type of leadership would give national governments and private corporations a competitive advantage: they can prepare to reap the rewards and minimise the risks of technological innovations in advance of others. For instance, anticipatory leadership promises to buy national governments and private corporations more time. This time allows them to forge earlier agreements that facilitate earlier and greater responsibility in technological innovations. They gain critical time to deliberate about which goals might be most scientifically and socially useful to pursue, and time to discover what types of policy approaches are most likely to work. Anticipatory leadership can alert public and private organisations to a range of future possibilities, a variety of pathways to realise them and what physical and human capital may prove useful along the way.

Scientific knowledge is fragile: it is produced with research practices that are carefully tailored to specific targets to answer precise research questions. So when the research questions, technologies and methods change, the research processes and the contents of scientific knowledge change accordingly in ways that are often unpredictable. In preparation for change, no public or private organisation can do everything all at once. Thus collaboration among research organisations and social groups with different expertise remains crucial. Judicious connections among public and private organisations are needed to prepare for the uncertain changes that technological innovations will create. A research ecosystem that is largely characterised by the research practices of a few dominant types of research organisation might be too fragile to adapt to uncertain changes because the dominant research practices may act as a filter on the types of scientific research that are feasible. The dominant research practices might be very competent at answering specific questions when the research questions are known in advance. However, the dominant research practices may not be well-prepared to answer a mix of questions when the research questions that need answers are uncertain and change. Public policy may help promote long-term goals to provide the physical capital — such as sustainable research infrastructures — and human capital — such as new technical skills and informed ethical reflection — that are critical to facilitate reliable and responsible scientific research and technological developments. A diverse research ecosystem that is open, transparent and collaborative empowers policy-makers, diplomats, politicians and others to more easily access a range of experts in organisations scattered throughout the ecosystem to respond well to the constantly changing circumstances of science.

[2] Diversity, inequity and collaboration

A diverse research ecosystem helps science prepare for uncertain futures; social inequities, by contrast, can get in the way of knowledge advancement.

From a scientific standpoint, a research ecosystem that is populated with a mix of research communities pursuing various scientific aims, with a range of tried-and-tested technologies and methods available, promises to be robust enough to adapt faster and better to uncertain changes. A diverse research ecosystem allows research organisations to engage in critical collaboration with each other and produce more robust scientific research as a result. A variety of research practices facilitates reliable research that might help to better inform policy-makers, diplomats, politicians and others responsible for effective governance. A diverse research ecosystem is prepared to provide quick and competent answers to whatever the future research questions might be. A diverse research ecosystem is more likely to already have various research organisations with judicious connections among each other in place to manage the yet-unknown scientific and social effects of whatever technological changes might arise.

From a political standpoint, a research ecosystem that gives policy-makers, diplomats and politicians access to a diversity of perspectives is critical for cultivating informed and stable agreements (nationally and internationally) about what scientific and social aims to pursue and prioritise. A diverse research ecosystem may help science diplomacy to access the range of expertise inside and outside science that is critical for bridging the gap between evidence-based science and opinion-based politics. The inclusion of a diversity of perspectives allows for inclusive and subsequently stable policy frameworks. For instance, research collaborations and the co-production of scientific projects can help to cultivate social trust across geopolitical divides. This trust allows for robust research and inclusive scientific and social goals to emerge, which might help to inform and shape international agreements. This web of judicious connections empowers anticipatory leadership to more easily access the situated knowledge and organisational relationships necessary to translate policy aspirations for future science into actionable policy that can adapt quickly and competently when unforeseen scientific and technological changes arise. In return, policy-makers can more easily cultivate scientific discoveries and technological developments that meet a plurality of social needs as experienced by multiple publics in various times and places.

International collaboration on AI governance is particularly critical and needs easy access to a range of expertise to clarify how abstract principles of transparency, accountability and responsibility may be technically implemented within an array of social settings. For instance, bridging the digital divide needs the speedy and socially sensitive integration of technology across radically different and complex social settings in the majority world, aiming to facilitate inclusive innovation rather than allowing AI development to worsen existing inequities. Collaboration can aspire to balance the power of private corporations to set the agendas for AI development and manage conflicts of interest between private corporations and the array of publics that AI affects. The effective implementation of a general framework relies on situated expertise across multiple stakeholders, including national governments, civil organisations, private corporations, private investors, international organisations and others, to close the digital divide effectively and fairly.

International collaboration can shape how extraterrestrial space is governed. Among other considerations, international collaboration can shape how various nation-states and private corporations may use space resources, and how the international community may manage the various inequities in access to space among different nations. Effective space governance must contend with private space stations, an international race for settlements on the Moon, early plans for Mars settlements and the potential to mine asteroids near Earth. International collaboration can cultivate a climate of trust among nation-states, private corporations and international organisations, and provide a blueprint and a basis for a climate of trust in science diplomacy elsewhere. For instance, the European Space Agency was founded by 11 nation-states and has expanded to 22 nation-states today. A multiplicity of multilateral organisations allows for various small-scale projects rather than merely large-scale, high-budget projects. For example, cooperation among the 22 nation-state members of the European Space Agency on the Jupiter Icy Moons Explorer or JUICE mission heavily relied on extensive diplomatic trust-building. Moreover, space stations provide a promising site for science diplomacy. For instance, the United Nations (UN) and China have had a successful decade of science diplomacy to facilitate the effective use of China's space station.

Another area where international collaboration is critical is ocean management and particularly coral conservation. Collecting and sharing biodiversity data across time is critical to protect biodiversity in ways that can feasibly accommodate a range of stakeholders and sustain the environmental, food and climate benefits of ocean biodiversity. Marine social science and marine science and technology studies are pioneering diverse research environments to produce more robust research on the scientific and social significance of ocean health. Similarly, diplomatic collaboration is also needed to facilitate effective interventions and protection efforts at regional and local levels. Without proactive management of ocean resources going forward, geopolitical tensions and the risk of self-destructive environmental damage will escalate.

The need for scientifically reliable research and the need for socially responsible research sit closely together. One reason why social inequities are a significant problem for scientific research is that they reduce research diversity. For instance, the inequitable access to resources can act as a barrier to entry for less-resourced organisations to participate in resource-intensive research. So the persistence of social inequities is a major barrier to cultivating a diverse research ecosystem because it reduces and distorts the participation of low-resource research communities. The material and social conditions of scientific research have both ethical and scientific significance. Diversity is often important for socially responsible research. The exclusion of some research organisations, especially low-resource organisations, is often unfair and based on largely economic premises rather than research excellence. Diversity is also important for scientifically reliable research, since many low-resourced organisations have distinctive expertise and access to natural resources (such as specific ecosystems or materials). The exclusion of different types of research organisation makes scientific research more fragile, vulnerable to the weaknesses of specific forms of dominant research. In a diverse research ecosystem, the different organisations might all have some weaknesses, but they do not all have the same weaknesses. As a result, political and diplomatic efforts to reduce the exclusionary effects of social inequities can reap the rewards of a more diverse research ecosystem and better avoid the dangers of a fragile research ecosystem in the face of uncertain technological changes.

[3] Open and closed science

Styles of governance have significant consequences for how well-prepared the research ecosystem is to adapt to social, environmental and technological change under conditions of uncertainty.

The main thrust of Open Science is to promote universal access to research processes and findings. However, in practice, public and private organisations must make various decisions about who can and should gain access to the multiple stages of scientific knowledge production and under what terms and conditions. A diversity-friendly vision of Open Science sees equitable material and social conditions as critical for ethical and robust scientific research practices. In the long run, science diplomacy can help to promote more equitable material and social conditions that enable the production of scientific knowledge to include a wider mix of research communities. In the here and now, science diplomacy might help to manage social inequities that potentially prevent various research communities from fully and fairly participating in the production of scientific knowledge. For instance, corporate regimes of intellectual property and the private ownership of data infrastructures can close down opportunities for critical collaborations among multiple research organisations which are necessary to co-produce scientifically reliable and socially responsible scientific knowledge. The exponential development and dissemination of AI throughout the research ecosystem is the latest stage in the reproduction of social inequities that can harm the epistemic diversity needed to produce robust scientific research.

Effective technology governance needs easy access to knowledge and expertise — and relationships that grant such access — about the expected scientific, diplomatic, political, economic, social and public effects of innovation and their potential to benefit national governments, private corporations, domestic populations and the wider international community at large. For instance, a web of judicious relationships among different experts and stakeholders across public and private organisations is critical for responsible public and private investment into new scientific ideas and instruments, and to transfer scientific knowledge into industry and the public sphere. So public policy can aspire to help reduce the causes of social inequities over time and may help low-resource scientific communities adapt to the consequences of social inequities. In return, this promises to facilitate more diverse collaboration that can take advantage of shared resources and more robustly analyse research findings from a range of perspectives.

Of course, Open Science policy may aim to change the incentives to close the science/practice gap. However, the more power that specific research organisations gain to set the terms and conditions of collaboration and the co-production of science knowledge, the less robust the research ecosystem as a whole may become. For instance, the sheer scale and internal complexity of big multinational corporations make them slow to change. So the dominant research practices of big multinational corporations might have a homogenising effect on the research ecosystem as a whole and result in fewer opportunities for critical collaborations, with fewer organisations experimenting with alternative research practices. In contrast, organisational contestation can help to facilitate the information and incentives critical for making research bodies more able and willing to review and revise their research practices in the light of feasible alternatives. As a result, a diverse research ecosystem populated with a mix of small and medium-sized enterprises that are more able to adapt quickly and are more willing to take risks to grow is more likely to discover feasible technological innovations.

[4] Fragmented geopolitics of science

The geopolitical backdrop shapes scientific research; science diplomacy, in turn, helps science navigate the changing circumstances of politics.

The cosmopolitan nature of science is challenged by ongoing disruptions to the geopolitical landscape, which are rewriting some of the central assumptions of the equilibrium that followed World War Two (such as international solidarity, focus on the public interest, transnational partnerships and open exchange of information) that allowed science diplomacy to transcend political divides. Nevertheless, two guiding principles for science diplomacy are to stick to science as key evidential grounding for decision-making and to promote rigour and reliability in research. This allows the best science and technology to retain significance and a degree of authority across fragmented political divides.

Since technological innovations in globalised markets are unplanned and therefore highly unpredictable, public policy must often follow rather than lead technological innovations. So reactive policy that largely responds to the latest innovations might be a natural consequence of such uncertainty. However, reactive policy is often bad policy, because it intervenes too late and in ad hoc ways that cannot cope with complex and fast-paced environments. In particular, reactive policy risks “policywashing”. In other words, reactive policy may be largely designed to accommodate whatever technologies already exist, with only minor adjustments. Once a new technology exists, it is very difficult for public policy to change it, given the vested interests and easily seen benefits.

As an alternative approach, proactive policy promises to empower public and private organisations to access the benefits of scientific knowledge early and anticipate risks quickly. A diverse research ecosystem can empower policy-makers to change from reactive policy frameworks to proactive policy frameworks that govern the uncertainty of technological innovation more effectively. In particular, a diverse research ecosystem can grant policy-makers greater and quicker access to more robust types of transdisciplinary expertise through external research organisations or internal “in-house” experts. For instance, science advisors can give politicians and policy-makers a basic understanding of what future technological developments may be feasible, while social scientists may help clarify the likely trade-offs and ripple effects of these trends, and researchers based in the majority world can bring key insights from their scientific and social perspectives. A diverse research ecosystem can empower policy-makers to cut through the noise and understand technological innovations from multiple standpoints that take various security, social, economic, political and environmental considerations into account. This can help to build trust between policy-makers and scientific researchers, and empowers policy-makers to quickly and confidently anticipate what technological innovations might arise and what their various effects might be in a language that is actionable. As a result, the shared social and economic advantages of proactive policy frameworks might help to cultivate cooperation and collaboration across fragmented political divides.

How to confront issues of ethics and accountability remains crucial, especially in the face of rapid AI developments. It might be tempting to accept a mechanised understanding of ethics, which can be automatically implemented within computational systems and is perceived to easily accommodate the need for ethical AI. General rules and regulations certainly are helpful in setting limits to the technology and its uses. However, ethical practice manifests in many everyday decisions taken by AI developers and users, and when dealing with such everyday practice, a general formula for ethics is neither possible nor necessary. Rather, contextual, case-by-case human judgements are indispensable to ethics. So ethical AI is never fully separable from human agency to decide how to design and use AI responsibly in specific situations. A governance climate that prioritises “quick” AI innovation hinders the ability of public and private organisations to adapt to AI developments responsibly, for instance by taking the time to engage relevant publics and investigate long-term implications. As a result, proactive policy frameworks can aspire to cultivate a web of relationships among a variety of scientific and policy experts and a mix of research organisations throughout the research ecosystem to prepare the international research community for the complex ripple effects of whatever the next AI development might be. When a diverse research ecosystem is already in place, the international research community is prepared to develop the contextual human judgements critical for managing the uncertain scientific and social effects of AI developments across specific social settings quickly and competently.

In the case of space governance, international science diplomacy can build positive pathways towards proactive policy frameworks that put critical precautionary measures and research infrastructures in place. In return, proactive policy frameworks are necessary to reduce the misuse of space technologies for space supremacy and facilitate the future development of space technologies to effectively govern and coordinate novel uses of space by a growing number of public and private organisations. For instance, participatory community mapping processes might provide a blueprint for how collaboration among local and international publics can manage risks with effective territorial governance. However, the use of space is not like the discovery of more land, with space conditions radically different from terrestrial ones. Hence science diplomacy should rely on expertise in law, policy, industry, physics and astrobiology to understand how Earth-based concepts may apply and adapt to regions and uses of space.

In the case of ocean governance, a proactive policy framework that coordinates various stakeholders (including private funders and donors, national and local governments, and local communities and industries) and facilitates critical collaborations to understand what values and whose values should be used when evaluating the health of oceans — and particularly coral reefs — may build trust through enhanced transparency and accountability. In return, a proactive policy framework may improve the efficacy of conservation and resourcing efforts with more transparent and accountable decision-making that allows for timely and targeted interventions with streamlined resources and aligned priorities. Proactive ocean policy promises to prevent changes in ocean health from going beyond tipping points that may be harmful and very expensive or infeasible to heal. In particular, the experiences and perspectives of local populations at a regional level can help facilitate politically feasible and economically attractive conservation efforts. For instance, conservation efforts may support economic development. Capacity-building with technological transfers and training is critical to bridge the gap between technological developments and local needs. For example, mobile-phone apps can empower local communities to undertake effective conservation activities in coordination with public authorities. In return, economic and political barriers are more easily managed when monitoring and managing coral conservation takes advantage of local knowledge and local practices to align global and local interests.

Once science diplomacy has achieved national and international agreement on the basic aims of scientific research and technological development, proactive policy frameworks allow for early local governance that can more easily facilitate useful scientific research and technological developments and more easily prevent risky technological innovations. The promise of proactive policy is that new technologies are largely designed to pursue whatever basic aims national and international communities have already agreed to incentivise and minimise whatever risks national and international communities have already agreed to mitigate. This empowers public and private organisations to more easily prepare for the uncertain changes that rapid technological developments will bring across scientific and social practices.

[5] Research assessments, skills training and scientific education

Updating research assessments, science education and skills training helps science and society prepare for uncertain technological changes in and across various scientific and social settings.

Public policy often aims to promote standardised research assessments. This is useful for some scientific goals, such as the reproducibility of research results and the interoperability of data. However, it is important to remain sensitive to the array of scientific and social contexts within which different research communities conduct their research if and when a “gold standard” for one research community is applied elsewhere. In practice, scientists often use highly specialised methods and models that have evolved and adapted to answer specific questions within highly specialised subdisciplines. Moreover, scientists must often rely on the resources (technologies, software and data infrastructures) that their social circumstances allow them to access. A more localised approach to research assessments can better adapt to the particular scientific and social contexts of specific research communities, relying on judicious connections with neighbouring research communities to take full advantage of their targeted scientific expertise and local knowledge and so provide more accurate assessments of research quality.

There are good reasons for policy-makers to trust the judgements of neighbouring research communities about the quality of local research. Firstly, neighbouring research communities often have highly context-specific knowledge and know-how about their specific environmental and social context. Secondly, neighbouring research communities often care deeply about producing research that can benefit their scientific and social communities. So a universally applied “gold standard” is not always the best way to assess research. Local research communities can cultivate judicious relationships with neighbouring research communities to critically collaborate with each other and more accurately assess the quality of their respective research.

Transdisciplinary skills and research empower policy-makers to access a range of research communities to gain more robust evidence and a broader evidence base. In return, policy frameworks can more quickly and more competently govern scientific research and technological developments in the face of uncertain scientific and technological developments. Since scientific research is embedded within a variety of societies, it has a variety of social effects. For instance, science, technology, engineering and mathematics research does not exist in a social vacuum but in specific social settings. So scientific research and technological developments need much more than “the truths of science” to work well. The social effects of scientific research and technological developments show the need for deep transdisciplinary collaborations. For instance, social scientists — sociologists, political scientists, economists and others — may rely on qualitative research to help understand the more nuanced social effects of scientific research and technological developments. Moreover, arts and humanities scholars — historians, science and technology studies scholars, philosophers and others — may help frame the normative significance of scientific research and technological developments in specific settings and manage the difficult ethical trade-offs as they arise in the production and deployment of scientific knowledge.

Most significantly, transdisciplinary research can use engagement between experts and citizens to understand and use local knowledge and norms to adapt the development and implementation of new technologies to work well in and across specific social settings. A participatory approach that includes historically under-represented and marginalised communities as expert and non-expert participants might help to cultivate a collective sense of agency and responsibility for policy decisions regarding emerging technologies in a climate of mutual trust and recognition. For instance, public policy can aim to give local, regional and national publics access to the knowledge and technologies needed to prepare local communities for technological changes. For example, investment in early trials can uncover potential social impacts to prepare for large-scale uses. This may help public policy to protect against a patronising and paternalistic “parachute” style of science — and the backlash this tends to produce. Transdisciplinary research can help public policy to promote socially responsible science that embeds research practices in a web of judicious relationships that aim to meet various social needs as various publics see them.

Transdisciplinary education supports the public to develop the range of knowledge and skills they need to take advantage of rapid technological developments in the workplace as employers or employees, in the marketplace as investors or consumers, in politics as politicians, policy-makers or citizens, and elsewhere. Transdisciplinary education empowers the public to better understand how emerging technologies work and what their expected scientific and social advantages and disadvantages might be. So science diplomacy may help to balance the trade-offs between investing in technical skills in higher education and basic education in primary education, with educational pathways that promote greater access and career pathways that can help the public reap the rewards of emerging technologies more equitably. For example, public education can promote awareness of the value of coral conservation through grassroots campaigns and artistic initiatives like Voices of the Reef. The Citizens Forum at GESDA has started to contribute towards this aspiration. Rather than divorcing the development of emerging technologies from social considerations, a participatory approach embeds emerging technologies in an understanding of social needs and values from the start. In particular, the UN is an organisation with a highly inclusive membership, which gives it a distinctive type of legitimacy in making scientific decisions. This allows for human-centred innovations that put human needs and values at the front and centre of technological development. Transdisciplinary education promises to promote the socially responsible use of scientific evidence with more sensitivity to how social context can and should shape how evidence is selected, interpreted and (re)purposed in scientific research, public policy and public discourse.

Section Two: Opportunities for the future

This section summarises five of the lessons learnt from considering current and upcoming opportunities and challenges in research and development, such as the ones discussed in more detail within the next section (Section Three).

  1. Intelligent technology development for the long-term future: prioritise high-quality, resilient and reliable technology, even when this comes at the expense of using cutting-edge, seemingly convenient solutions whose longer-term impact has not yet been assessed. Invest in the systematic assessment of social and environmental impacts of any given technology, which should be part of any process of research and development.

o Example of possible action: support, monitor and formally recognise companies that commit to social and environmental impact assessment of the technologies they develop. Foster initiatives in environmental intelligence which harness the power of AI for responsible and sustainable environmental interventions, including systematic studies of their impact.

  2. Ethics at the core of research and education: ensure that ethics, understood as the capacity to evaluate and articulate the moral grounds and implications of one’s choices, is taught and monitored across training and development programmes in science and technology, making it clear to researchers that value judgements unavoidably underpin their work and therefore need to be explicitly monitored and articulated.

o Example of possible action: foster ethics training by specialised staff as a compulsory part of any scientific training programme, whether in publicly funded or private research institutions. Training in data ethics as an integral part of research design and application is a case in point.

  3. Education and infrastructures: invest in infrastructures and human skills to provide the capacity to address digital divides. Digital infrastructures alone will not be enough to encourage participation, no matter how user-friendly. The development of infrastructures such as databases or AI platforms needs to be complemented by venues and related funding for regular consultation, engagement and training, including in applied ethics, as explored below.

o Example of possible action: support business models for long-term funding and maintenance of data infrastructures; invest in initiatives that foster long-term engagement around the design and use of such infrastructures, with the aim of enhancing their capacity to serve the needs and skills of the most vulnerable users.

  4. Governance and engagement: strengthen cooperation around local governance of technology for both public and private stakeholders, ensuring extensive and regularly staged forms of public engagement around technology development as well as its implementation in society at large.

o Example of possible action: establish and support communities of practice that bring together multiple stakeholders on an equal basis, thereby making it possible for researchers and users to discuss and improve technology design and implementation for the public interest.

  5. Intellectual property: foster an openness culture in upstream research and IP discussions around innovation, encouraging a research ecosystem in which sharing resources, data and skills is attractive, feasible and safe, and results in adequate compensation and recognition for contributors.

o Example of possible action: encourage private companies to disclose upstream research projects they are pursuing, so that even when data cannot be shared due to sensitivity, the topics being investigated are findable and relevant researchers can get in touch with each other where useful.

Section Three: Six key opportunities as case studies

This section outlines six key opportunities and related challenges which are likely to define scientific research and technological developments over the coming decades. Under each heading, we give a brief introduction and highlight the emerging issues likely to become critical in the near future.

Opportunity One: AI and the digital divide

This section outlines the various disruptions AI is already bringing to geopolitics. The 2023 AI Safety Summit in the UK and the 2025 AI Action Summit in France displayed the evolving geopolitical significance of AI, ranging from its transformation of the labour force (and associated elimination of countless forms of work taken over by advanced computational systems) to a shift in economic and political dominance associated with who has control of the technology. The development of AI contributes towards a scientific arms race for competitive advantage and scientific supremacy, with AI’s economic and military potential changing the balance of power globally. Sometimes referred to as the Fourth Industrial Revolution, the exponential development of AI capabilities is directly or indirectly related to the political and economic fates of national governments and the world as a whole. Science diplomacy promises to provide swift and safe access to AI across geopolitical divides.

Computing power

The development of the newest forms of AI, including most prominently large language models, is intimately tied to progress in high-performance computing (HPC). The newest frontier for such progress is the shift to quantum computing, which experts rate as realistic within a relatively short timescale of around 10 years or less. Such a shift will foster an enormous jump in computing power, granting market dominance in AI applications to anyone able to take advantage of it. Even more fundamentally, access to quantum computing may become indispensable to be able to engage with HPC in the first place, as the standards and methods required to operate quantum computers differ substantively from those required for traditional HPC.

This has two major implications:

(1) much investment and labour is required to convert existing coding, data and models into entities that can be run via quantum computing; and

(2) those who have the opportunity to train on quantum computers acquire skills that set them apart from those who can only operate more traditional HPC, as the sketch below illustrates.

These observations reveal the urgency of confronting the question of who may be granted access to quantum computers and how. Failure to tackle this issue now may lead to a further widening of the digital divide in the future, with engineers and data scientists based in locations with no access to quantum computers at risk of being left behind. Such problems are already visible in current HPC systems, where most publicly funded supercomputers in Europe choose to give priority to large projects for access to their facilities. This choice is understandable, since those projects have a scale (in terms of data, resources and impact) that suits the use of HPC, but it also makes it impossible for the vast majority of interested stakeholders to engage with HPC and acquire the related skills.
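To make implication (2) concrete, the following is a minimal sketch of how quantum programs are expressed, using the open-source Qiskit library (our choice for illustration; the report does not name a specific toolchain). The program is a circuit of gate operations on qubits rather than the loops and arrays of classical HPC code, which is one reason existing code, data, models and skills do not transfer directly.

```python
# Minimal quantum program sketch (illustrative only), using Qiskit.
# Unlike classical HPC code, the "program" is a circuit of gate operations
# whose measured output is probabilistic.
from qiskit import QuantumCircuit

# Two qubits, plus two classical bits to hold measurement results.
circuit = QuantumCircuit(2, 2)

circuit.h(0)                      # Hadamard gate: put qubit 0 into superposition
circuit.cx(0, 1)                  # CNOT gate: entangle qubit 1 with qubit 0
circuit.measure([0, 1], [0, 1])   # measurement collapses both qubits together

# Even inspecting the program requires quantum-specific notation:
print(circuit.draw())
```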

Connectivity

The UN envisions universal connectivity by 2030. The lack of universal connectivity has allowed the AI race to widen the digital divide. Inequities in access to connectivity are a significant barrier to inclusive development and allow exclusionary uses of science to remain the default. For instance, connectivity typically relies on three layers: an application layer with front-facing processes that access internet services, an internet layer with protocols that allow networks to communicate universally and a physical layer with communication infrastructure and links between them. Without universal connectivity, regions in Africa and other parts of the majority world rely on private corporations to provide undersea cables to support the physical layer of connectivity largely on their terms.
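As a schematic illustration of the three layers just described, the sketch below encodes them as a plain data structure; the example technologies listed in each layer are our own illustrative assumptions, not an exhaustive inventory.

```python
# Schematic sketch of the three connectivity layers described above.
# Example entries are illustrative assumptions only.
connectivity_stack = {
    "application": ["web services", "messaging apps", "AI tools"],
    "internet": ["IP addressing", "routing protocols"],
    "physical": ["undersea cables", "cell towers", "satellite links"],
}

# Each layer depends on the one below it, so control over the physical layer
# (e.g. privately owned undersea cables) constrains everything above it.
for layer in ("physical", "internet", "application"):
    print(f"{layer:>11}: {', '.join(connectivity_stack[layer])}")
```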

Without universal connectivity, it is more difficult to distribute the international benefits of AI development equitably. In practice, higher-resourced communities in wealthier regions can quickly capture most of the advantages of AI development. In contrast, lower-resourced communities in poorer regions often lack opportunities to influence the development of such technologies and the extent to which they may suit local needs, and thus stand to gain a smaller slice of the benefits in comparison. The relevant skills are asymmetrically distributed as well, with those able to access the relevant training typically belonging to wealthier classes with high educational attainments across societies, leaving already disadvantaged people behind. It is also important to note that universal connectivity does not necessarily mean high-bandwidth connectivity – and most AI tools require stable and powerful connectivity to work effectively, again disadvantaging those living in low-bandwidth settings. Moreover, data and programming infrastructures are needed for access to the training data for AI development, yet are very resource-intensive to maintain. For instance, scientific research and technological developments often change the formats, software and skills critical to maintain useful data infrastructures. So there is a trade-off between investing in preserving the utility of old data and investing in producing new data. The data-driven nature of AI development privileges higher-resourced initiatives financially capable of investing in preserving the utility of old data and producing new data. This again compounds the disadvantages of lower-resourced communities without such financial capabilities.

Safety and trust

While efforts to increase cybersecurity intensify, serious concerns remain in relation to the safety of AI, particularly given the prospect of its application via quantum computing. Hostile takeovers of digital systems are a regular occurrence that all countries must invest heavily to counter, often targeting crucial infrastructures ranging from medical facilities to banking and consumer networks. The emergence of AI and its instantiation in quantum computing involves unparalleled societal dependence on HPC services, in turn providing an unparalleled opportunity to disrupt society in fundamental ways through control over digital services.

The opportunities for crime and abuse offered by digital systems generate mistrust of such systems among the population, particularly in relation to black-boxed systems such as AI applications. One way to promote trust in the safety of AI is with explainable AI or “XAI”: AI that is designed to provide explanations for the decisions it makes. This aims to reduce the risk of reckless or arbitrary AI decisions in sensitive situations. However, XAI is not a silver-bullet solution for at least two reasons. Firstly, the quality of XAI is hard to know because only a small number of experts have the highly specialised technical knowledge and skills necessary to understand the decision-making processes of XAI. So trust in XAI is not easy to cultivate since most people lack the ability to understand how it works and assess the quality of its explanations. Secondly, the highly resource-intensive nature of AI development allows the biggest private corporations to monopolise AI technologies and protect their property by making their inner workings inaccessible. This makes it difficult for others to scrutinise the reliability of the tools as well as to enter the market with new ideas and approaches. A monopolistic market gives scientists, diplomats, policy-makers, politicians and others a good reason to presume that market inefficiencies characterise AI development. Since research practices often lean towards what is easy rather than what is right, corporate monopolies may produce a suboptimal technology with significant negative externalities for users and others. In particular, AI development risks “safetywashing”: in other words, private corporations use the language of AI safety while their research practices continue to produce unsafe products that evade various accountability mechanisms.
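To illustrate what “explanations for the decisions it makes” can mean in practice, here is a minimal sketch of one simple XAI technique: additive feature attribution for a linear classifier, where each feature's contribution to a single decision is its coefficient multiplied by its value. The dataset and feature names are hypothetical; real XAI for deep models (and the scrutiny problems discussed above) is far more complex.

```python
# Minimal XAI sketch: additive feature attribution for a linear model.
# Data and feature names are illustrative assumptions, not real records.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
feature_names = ["income", "credit_history", "existing_debt"]  # hypothetical
X = rng.normal(size=(500, 3))
# Synthetic labels driven by known weights, plus noise.
y = (X @ np.array([1.5, 1.0, -2.0]) + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def explain(instance: np.ndarray) -> dict[str, float]:
    """Attribute the model's log-odds for one instance to each feature."""
    contributions = model.coef_[0] * instance
    return dict(zip(feature_names, contributions))

applicant = X[0]
print("decision:", model.predict(applicant.reshape(1, -1))[0])
for name, c in explain(applicant).items():
    print(f"{name}: {c:+.2f} toward the positive class")
```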

Convenience

It is tempting to think that AI can tame the beast of big data. The international research community is confronted with a vast volume of large-scale datasets that scientists cannot easily incorporate into their research. The impressive computational power of AI, combined with sophisticated standards making it possible to bring together resources from around the globe, promises to make highly data-intensive research quicker and easier for human researchers, lowering the bar for the time and skills required to mine large datasets and make meaning out of existing information. In other words, AI promises convenience.

Unfortunately, convenience is more troubling than it looks. Of course, convenience is a generally uncontroversial good: nobody is for inconvenience. Nevertheless, how convenience is balanced among other generally uncontroversial goods is controversial. For instance, convenience can trade off against quality (think of food: a convenient meal tastes very different from the best meal). Recent research shows that “convenience AI” has unintended but foreseeable long-term consequences for research. In the short term, AI might be convenient for a range of research purposes and particularly to automate routine tasks. However, in the long term, research practices may be adapted to produce research outputs that are made to be convenient for AI, rather than the other way around. This might transform AI from a research facilitator into a research filter if research practices and outputs that are not convenient for AI are devalued as a result. This filter effect risks fostering a fragile type of science that is vulnerable to the significant weaknesses of whichever AI-convenient research practices start to dominate the research ecosystem without robust alternatives to counterbalance them.

Democracy and misinformation

In politics, the right to a voice can have as much significance as the right to a vote. A functioning democracy relies on a free and fair public sphere in which citizens review and revise their political judgements in light of reliable sources of information and good-faith discussions among their political allies and political adversaries. However, AI is transforming the terms and conditions of participation in the public sphere. In particular, AI and its implementation through digital platforms significantly change the knowledge environment within which citizens must operate. The digital age has transformed how information is stored, processed, shared and manipulated, for better and worse. It changes how questions are framed, how answers are provided and how evidence is used. For instance, how AI works has become increasingly opaque. The speed of change produces a “black box” for both experts and citizens: neither the experts nor, as a result, the public always know how the algorithms work and why a certain result is provided. Even beyond the barriers of intellectual property rights and privacy rights, algorithms are not easy to access and understand. So the experts do not always know precisely what the machines are doing. As a result, it is not always easy for the public to know the complex ways in which AI is misused and its various intended and unintended effects on the public sphere. This has significant ripple effects throughout politics.

Firstly, AI maximises the quantity of information the public can access. Ironically, this risks a less informed public because citizens become less able to easily access good information in a sea of conflicting information. Once AI makes so much conflicting information available, what information to prioritise and disregard becomes a much more politically significant decision. So AI makes information selection a major skill that public and private organisations must develop to effectively manage the emerging problem of information overload. A major risk is that particular public and private organisations may select information for self-interested or ideologically partisan reasons. Most significantly, private corporations may select information for narrow commercial reasons, just as national governments may select information for narrow political reasons. So AI converts the evaluation and selection of information into a major skill for the public to develop in order to effectively manage the emerging problem of information selection.

Secondly, AI can minimise the quality of accessible information. For instance, misinformation can spread differently across different publics for various reasons. AI disrupts social interactions and social norms by elevating engagement, which becomes a central currency in political discourse as a result. This unintentionally but foreseeably shapes how the political opinions of political allies and political adversaries are publicly perceived. For example, algorithms on social-media platforms typically popularise engaging opinions rather than representative opinions. As a result, representative opinions might appear less popular than they are, even if they are sincere and competent opinions. Similarly, when AI popularises engaging but unrepresentative opinions, unrepresentative opinions might appear more popular than they are, even if they are insincere or incompetent opinions. As a result, engaging but unrepresentative, insincere and incompetent opinions may be publicly perceived as more acceptable and representative than they are, while sincere and competent but less engaging opinions become marginalised.
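The distortion described above can be made concrete with a toy simulation, sketched below under assumed (not empirical) parameters: a small minority of highly engaging opinions ends up dominating an engagement-weighted feed, and so appears far more popular than it is.

```python
# Toy simulation of engagement-weighted ranking; all numbers are assumptions.
import random

random.seed(42)

# 1,000 simulated opinions: 90% "representative" (low engagement),
# 10% "provocative" (high engagement).
opinions = ["representative"] * 900 + ["provocative"] * 100
engagement = {"representative": 1.0, "provocative": 15.0}  # assumed weights

# An engagement-weighted feed shows opinions in proportion to engagement.
feed = random.choices(opinions, weights=[engagement[o] for o in opinions], k=100)

share_in_population = opinions.count("provocative") / len(opinions)
share_in_feed = feed.count("provocative") / len(feed)
print(f"provocative share in population: {share_in_population:.0%}")
print(f"provocative share in feed:       {share_in_feed:.0%}")  # far higher
```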

Since the public must often make time-sensitive decisions with incomplete information, science communication through news outlets, social media, public figures, opinion leaders, journalists, podcasts and other channels can aim to inform the public of how to use new technologies responsibly and alert them to the risks of misuse. Fact-checking, such as that conducted by professional journalists, is one way to manage misinformation. Though relatively expensive when implemented at scale, this can be a powerful tool, particularly since human checkers can understand and evaluate the context and purposes for which a given piece of information was provided, and whether the intended interpretation of the given “facts” is reliable. More scalable but potentially less effective are forms of automated fact-checking such as AI-powered debunking tools. Debunking algorithms utilise generalisable, machine-readable, formalised forms of research to spot misinformation and swiftly correct it — an approach which is easily applied to vast regions of the internet, yet may be countered through misinformation tools programmed specifically to evade such checks. A concern common to both human-led and machine-led approaches is that fact-checking can be significantly slower than the spread of misinformation. One reason for this is that misinformation is often presented in ways that are more emotionally engaging than the facts.
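As a rough sketch of how an automated debunking tool might use machine-readable claims, the example below compares a new claim against a small database of already-verified claims using text similarity. The claims, verdicts and matching threshold are illustrative assumptions; production systems add large-scale retrieval and natural-language inference, and remain vulnerable to the evasion tactics noted above.

```python
# Sketch of one automated fact-checking step: claim matching by text similarity.
# Claims, verdicts and threshold are illustrative assumptions.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

verified_claims = {  # hypothetical fact-check database: claim -> verdict
    "vaccines cause autism": "FALSE",
    "global average temperatures are rising due to human activity": "TRUE",
}
new_claim = "do vaccines cause autism in children"

docs = list(verified_claims) + [new_claim]
vectors = TfidfVectorizer().fit_transform(docs)

# Compare the new claim (last row) against all verified claims.
scores = cosine_similarity(vectors[len(docs) - 1], vectors[: len(docs) - 1])[0]
best = int(scores.argmax())

if scores[best] > 0.3:  # assumed matching threshold
    matched = list(verified_claims)[best]
    print(f"likely match: {matched!r} -> verdict {verified_claims[matched]}")
else:
    print("no confident match: route to human fact-checkers")
```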

Another way to manage misinformation is evidence-following. It is tempting to think that politicians should follow the evidence. However, evidence is not the same as facts: evidence provides reasons to accept factual judgements. In practice, good evidence can still conflict and mislead. One reason for this is that evidence is situated: it is produced in a specific scientific and social setting to achieve specific scientific and social goals. So when the scientific and social goals change, the quality of the evidence can change. As a result, the facts are often uncertain in the time-sensitive circumstances of politics. A similar way to manage misinformation is expert-following. It is tempting to think that politicians should defer to the experts to tell them what evidence to follow. However, in practice, experts and other scientific authorities can disagree in and across disciplinary boundaries about the quality of the evidence for particular policy agendas. So the experts are often deeply disputed in the adversarial circumstances of politics.

Narrative-checking offers a promising complementary way to manage misinformation. One reason why some types of misinformation gain popularity rather than others is that narratives structure the meaning of new information. So misinformation can take advantage of popular narratives among various publics, which attach distinctive meanings to some types of misinformation rather than others. As a result, narrative-checking adopts a situational approach towards misinformation and highlights the variety of ways in which meaning is attributed to data, given the specific scientific and social circumstances of various publics. A situational approach shows that transdisciplinary engagement is often critical for policy-makers to fully understand the causal roles that situational factors such as narratives can play in the spread of misinformation.

Opportunity Two: Governance of space technology

This section outlines how the geopolitics of space is evolving with significant consequences for national security, international cooperation and the environment. At an international level, various international treaties, agreements and conventions govern the use of space. As many more nation-states and non-state actors have entered space, national governments have produced and adapted various regulatory frameworks to manage the evolving commercial uses of space. However, international space governance has not adapted as quickly to facilitate and manage the evolving and emerging uses of space. In particular, the bilateral Cold-War-era frameworks need to adapt to the multilateral nature of space use today by various state and non-state actors. Science diplomacy is critical to bridging the gap between old international legal and policy frameworks and new technological developments, helping to manage legal and technological uncertainties. In particular, science diplomacy can facilitate fruitful cooperation among a range of public and private organisations to reap the various social rewards that space promises. Most significantly, science diplomacy can prevent the mismanagement and strategic misuse of space technologies that could result in space becoming unusable and in environmental and connectivity problems back on Earth. Such failures would eliminate the huge opportunities for social and economic development through non-militarised space technologies.

Space resources

In the not-too-distant future, it is likely to become technologically feasible to extract various natural resources in space. The commercial feasibility of low-cost launch vehicles and reusable rockets has significantly enabled the development of contemporary space technology. For instance, space mining promises to extract water and precious metals and minerals from nearby asteroids. Similarly, space may provide an effective source of solar power. Access to space resources provides a possible way to promote economic growth with a greater supply of natural resources and to protect the environment without the need to extract natural resources on Earth. This would need the technological capabilities to process space resources in space and, most importantly, to dispose of waste responsibly to reduce the costs of transporting junk back to Earth and prevent the pollution of processing the resources on Earth. Waste disposal may need to be elevated as a concern in this area, as it is not currently prioritised as a research focus (the emphasis being more on the opportunity to extract than on the environmental implications). From a technological standpoint, the gravity barrier remains a significant obstacle to the technological feasibility of space-resource extraction. Moreover, from a commercial standpoint, space mining and water harvesting need significant upfront investment and long-term planning. Hence the potential of space-resource extraction may exceed the frontiers of contemporary technology. Nevertheless, it is not an impossible expectation for the near future.

Satellite governance

As space has become more accessible, it has become much more crowded. No single satellite constellation is especially significant in isolation, but the cumulative effects of many constellations are becoming very significant, with highly unpredictable patterns of behaviour emerging. The growing accessibility of space has seen more public and private organisations use space for various purposes. Satellite technology is used across an array of areas, including military and security activity, agriculture, shipping, digital communication, climate change, natural disasters and cosmological research. Many more public and private organisations are able and willing to put the newest satellites into the best places in space. Rather than merely one or two nations, most UN member states can now access space with satellites, as can many private corporations. For instance, five nations have landed spacecraft on the Moon, and China’s and Russia’s space programmes aim for settlements in the 2030s.

This has caused political concerns around sovereignty over space as a territory and how this may be governed and regulated in the future. There are also significant environmental concerns about the continued usability of space, given its growing scientific, cultural, commercial, civil, civic and military uses. The more satellites orbit Earth, the more space risks overpopulation, with direct satellite-satellite collisions and indirect satellite-debris collisions potentially triggering a runaway collision cascade (the so-called Kessler syndrome). The number of active satellites is expected to grow from around 2,000 in 2018 to 100,000 or more by 2030. Since low Earth orbit reduces launch costs and latency times, it is particularly at risk of overpopulation, and greenhouse gas emissions might further reduce how many satellites low Earth orbit can safely hold. Satellites often need replacement every five years, and relaunches often fail, thus adding to the waste-disposal problem already identified in the previous section in relation to extraterrestrial resource extraction. Moreover, anti-satellite tests intentionally destroy satellites, increasing debris travelling at orbital speeds at which even very small pieces can cause significant harm. A collision cascade could make space largely unusable for most public and private organisations.
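To convey why cumulative effects matter more than any single constellation, the toy model below simulates the feedback loop at the heart of the Kessler syndrome: collisions create debris, and debris makes further collisions more likely. This is a purely illustrative sketch; all parameter values are invented for illustration, and real orbital-debris models account for altitude bands, debris sizes and active removal, which this sketch ignores.

```python
# Toy illustration of a debris cascade ("Kessler syndrome").
# All parameters are invented; this is not a validated orbital-debris model.

def simulate(years=30, sats=8000, debris=30000,
             launches_per_year=3000,
             collision_rate=2e-9,          # collisions per satellite-debris pair per year
             fragments_per_collision=1000,  # new debris objects per collision
             decay=0.02):                   # fraction of debris re-entering each year
    history = []
    for year in range(years):
        collisions = collision_rate * sats * debris
        debris += collisions * fragments_per_collision  # each collision spawns fragments
        debris -= decay * debris                        # atmospheric drag removes some debris
        sats = sats - collisions + launches_per_year    # losses plus steady launch cadence
        history.append((year, round(sats), round(debris)))
    return history

for year, sats, debris in simulate()[::5]:
    print(f"year {year:2d}: {sats:7d} active satellites, {debris:10d} debris objects")
```

With these made-up numbers, debris initially decays faster than collisions replenish it; as the satellite population grows, the collision term overtakes decay and debris growth becomes self-sustaining, which is the qualitative point of the cascade.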

Misusing space

Space risks both unintentional and intentional misuse. International space governance therefore needs well-defined and well-enforced rules to coordinate the complex uses of space by multiple organisations and prevent such misuse. This puts pressure on science diplomacy to facilitate the production of feasible and fair frameworks that manage liability issues, debris clean-up and traffic rules, deciding who is entitled to use which regions of space, for which periods and under what circumstances, so as to preserve the usability of space. Without such frameworks, there is a risk that the most powerful public and private organisations will monopolise space, gaining space supremacy and dominating its uses.

International space governance comprises international treaties, a growing body of soft law that is becoming legally binding, and national legislation that strengthens international commitments. Nevertheless, it lacks a robustly proactive framework of general rules to guide decision-making and prevent collisions, cyberattacks and conflicts. One major problem is that monitoring and enforcement mechanisms for violations of international rules are difficult to deploy effectively: they may be technologically infeasible or prohibitively expensive, and harsh punishments could have unintended chilling or adversarial effects. Intentional misuses of space through covert sabotage and other hostile acts are hard to prove when much of the evidence drifts in orbit and there are no eyewitnesses to the crime. As a result, science diplomacy may provide a critical channel for enforcing international legal and policy frameworks more sensitively and facilitating more effective international space governance.

Opportunity Three: Responsible and equitable environmental technology

This section outlines how geopolitics might make use of environmental technologies to prevent, and adapt to, pressing environmental issues such as more frequent and intense droughts, ecosystem collapse and food insecurity. A geopolitical climate that prioritises environment-friendly technological developments would make various technological interventions into such issues feasible. However, these interventions often risk unintended and unforeseen ecological consequences, so emerging environment-friendly technologies need careful collaboration to be developed and deployed responsibly and equitably. In what follows, we cover some of the issues characteristic of this area. We do not delve deeply into the impact that AI may have on each of these issues and related environmental concerns, as this was briefly discussed in the AI-related section, and the transformative implications of computation could go in different directions depending on political and economic priorities over the coming five years.

GMOs, genetic resources and in silico biology

Continuous advances in the genetic manipulation of organisms, including most famously the opportunity for precision bioengineering provided by CRISPR-Cas but also the enormous progress made in deploying RNA to change developmental patterns, have opened the gate to an ever more extensive landscape of possible applications. Well-established techniques for growing cell cultures in the lab have paved the way for growing tissue and even organs under artificial conditions. Such technology supports the production of organoids and biorobotic hybrids for specialised purposes within biomedicine and beyond (think of the use of biorobots for surveillance, risk assessment or specialised construction). At the same time, animals are progressively bred in ways that make them ever more useful to humans, including as organ donors (in the case of pigs, for instance) and as highly specialised forms of nutrition. In vitro experiments have proven ever more effective in producing new vaccines and treatments. And AI is providing countless opportunities to accelerate and expand scientists’ ability to mine and analyse large masses of heterogeneous data, AlphaFold being the most striking example of how such a tool has changed and facilitated the investigation and artificial reproduction of protein folding.

Such developments raise questions around safety at multiple levels. First and foremost, there are concerns around the health and ecological implications of such biological manipulation, with many researchers worrying about the lack of understanding of (and related investment in studying) the long-term effects of bioengineering for individuals as well as communities and ecosystems. Technologies such as gene drives, for instance, introduce permanent changes to the ecosystems where they are unleashed by taking over the wild population and ensuring the spread of specific, and hopefully desirable, traits. The long-term, systemic implications of such interventions need to be investigated before they are unleashed. A second key question concerns omni-use, particularly in cases like the weaponisation of research on pathogens, which has already come into focus in debates around the origins of COVID-19 and continues to be carried out by military laboratories around the world, creating potential danger for all. Again, gene drives constitute an excellent example of a technology that is easy to weaponise and, once unleashed, almost impossible to take back; a toy model of why drives spread so relentlessly is sketched below. Less technologically cutting-edge but equally dangerous are species transfers across territories and continents, with a vast number of insects and pathogens travelling to new countries and wreaking havoc on local agriculture and public health. Such transfers are subject to strict monitoring and regulation in most countries, but continue to happen despite all attempts to eliminate them.
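The following sketch illustrates why a gene drive, once released, is so hard to recall. It models only super-Mendelian inheritance under random mating: drive-carrying heterozygotes transmit the drive allele with probability (1 + e)/2 rather than the Mendelian 1/2, where e is the homing efficiency. The parameter values are invented for illustration, and real drives involve fitness costs and resistance alleles that this toy model deliberately ignores.

```python
# Toy model of super-Mendelian inheritance by a gene drive.
# Parameter values are illustrative only; fitness costs and
# resistance alleles are ignored.

def next_frequency(p, e):
    """Drive-allele frequency after one generation of random mating."""
    # DD parents (frequency p**2) always transmit the drive;
    # Dd parents (frequency 2*p*(1-p)) transmit it with probability
    # (1 + e) / 2 instead of the Mendelian 1/2.
    return p**2 + p * (1 - p) * (1 + e)

p, e = 0.01, 0.9  # release at 1% frequency, 90% homing efficiency
for generation in range(12):
    print(f"generation {generation:2d}: drive frequency {p:.3f}")
    p = next_frequency(p, e)
```

With e = 0 the recursion reduces to p, i.e. ordinary Mendelian inheritance with no spread; with high homing efficiency the drive roughly doubles in frequency each generation from a 1 per cent release, approaching fixation within about a dozen generations, which is why release decisions are effectively irreversible.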

Agrotechnology

A geopolitical climate that pursues and prioritises environment-friendly technological developments can enable quick adaptation to rapid environmental changes. For instance, emerging technologies enhance drought resilience through a combination of molecular manipulation and traditional breeding, often applied to orphan or local crops not yet available on the global market (such as yam or cassava). Orphan or local crops with high levels of stress tolerance could enhance food security most significantly in regions prone to environmental stresses. One example is “resurrection” plants, which exhibit a natural solution to extreme drought: they protect themselves from cellular damage during drying. Resurrection plants desiccate so severely that they appear dead, yet they remain alive and spring back with the first summer rains. Similarly, the discovery of early light-induced protein (ELIP) genes shows how plants can protect themselves from light damage. These genes emerged convergently across resurrection plants in different environments, showing that nature has evolved resurrection plants multiple times. This has promising applications for crop resilience, biodiversity conservation and medical innovations; for example, it can help to improve vaccine storage and could one day support interplanetary travel. Biostimulants show success in small-scale interventions, but complex gene networks remain a barrier to large-scale applications.

Finding new crops or improving existing species to favour environmental resilience is not the only strategy ahead, however. Another crucial strategy likely to become ever more widespread is the adoption of crops that are novel for a given territory but well-known in others. This is particularly effective given that regions close to the equator are becoming too hot for the cultivation of species such as tropical fruits, while previously temperate regions such as the Mediterranean basin are now becoming hot enough to host such cultivations (as in the case of Greece recently becoming a producer of mangoes and papayas). Similarly, wine production is shifting north, with England and parts of Scandinavia starting to produce wines modelled on those of milder parts of Europe such as France, Austria and northern Italy.

Complex food challenges require transdisciplinary work rather than highly specialised silos. In the examples above, researchers need to cooperate with farmers and breeders from other territories, as well as researchers specialised in different parts of biology and other crops, to be able to support agricultural development in their own region. The need to accelerate scientific breakthroughs and employ novel methods such as AI can have two opposite effects: on the one hand, it can encourage ever more targeted and specialised interventions, with ever less awareness of the broader ecological and social impact, and less contact with scientists who have the relevant expertise to explore such impact; on the other hand, it can discourage siloed research and encourage transdisciplinary cooperation. Diplomacy can promote the latter solution over the former, thereby ensuring that appropriate forms of local knowledge and relevant expertise are brought to the development and assessment of novel agricultural solutions for the benefit of all involved.

The precautionary principle prioritises proactive prevention over reactive adaptation. In practice, this demands data and precise foresight. A systems approach is critical to look at interdependencies in supply chains to prevent systemic breakdowns. This can factor in how supply chains work and can change to balance the need for food security and the need for climate security. Policy-makers, diplomats, politicians and others need access to a mix of evidence and expertise to facilitate the effective governance of highly interdependent systems. This demands a plurality of scientific expertise as well as expertise on infrastructures, supply chains, policy and funding. This transdisciplinary and transprofessional cooperation is needed to allow the science to work well in a highly complex and integrated geopolitical landscape. As a result, science diplomacy can help science operate with the necessary infrastructure, policies and funding in place for effective implementation. For example, in many African countries, dwarf crops are being used to prevent militants from hiding in large crop fields. These crops are also more efficient as they are less resource-intensive and engineered to withstand harsher environments.

Scientists can also help public and private organisations anticipate tipping points. The risk of irreversible collapse shows the need to prevent rather than react. So policy needs data on the scale, severity and time horizons of environmental collapses to create proactive and precise policy that provides targeted and timely support. Of course, the lack of precise probabilities makes precaution much more difficult. This shows the difficult trade-offs policy-makers may need to make regarding what data they can access and translate into actionable policy. Nevertheless, this work is critical for the continued survival and future thriving of humanity and the planet.

Indigenous and local knowledge

The Human Rights Council of the UN recognised the right to food security under climate change in 2022. Yet the extent to which contemporary scientific and technological innovation towards food security takes account of existing local knowledge, and particularly knowledge relevant to science that is developed outside academia and private research labs, continues to be too limited. This is despite decades of efforts in that direction, for instance by the UN Food and Agriculture Organization and CGIAR (formerly the Consultative Group for International Agricultural Research). One critical component of such efforts is to empower women, as both food producers and consumers, to voice their experience and perspective in relation to scientific and technological developments and their prospective applications. Gender-responsive policy is needed both to foster women’s participation in the knowledge economy and to obtain more equitable food systems, given women’s key roles within them. Globally, women tend not to own land or capitalised farms, meaning they produce less and earn less. Several initiatives are designed to amplify women’s voices in the policy sector, incorporating various regional perspectives of women as food consumers and producers. Similarly, the voices of local producers, and particularly those based in subsistence economies, are rarely incorporated or considered as part of large-scale innovation; when they are, they tend to be extracted from their original context and exploited for product development, with no recognition or financial benefits for contributors. This is particularly critical in relation to Indigenous knowledge, where long-standing patterns of colonial exploitation stand in the way of equitable collaboration, recognition and rewards.

Behavioural change in the recognition and management of local and Indigenous knowledge is critical. This includes supply chains and consumption patterns, which need to foster sustainable farming and support good consumption habits in both environmental and health terms. A slow but necessary long-term goal is to cultivate good nutrition habits with more accessible information and easier access to a wide diversity of foods. Diversity in cultivation and eating habits is more likely to support good health, the appropriate use of land and soil, and the recognition of multiple forms of expertise and contribution to the food security system, thereby enhancing resilience and nutrition.

To this aim, the international question of food security continues to need large-scale, inclusive research projects that cross national and disciplinary boundaries, employing researchers with an array of different backgrounds and nationalities to promote solutions that make use of local knowledge and local context. The recognition and fostering of different forms of knowledge and expertise can also build trust across stakeholders, providing more information and shared norms to enable more effective collaboration and more scalable solutions. A trade-off between quantity and quality raises the question of whether to prioritise the production of more food or of more nutritious and safe food. The global calorie supply should not neglect the micronutrient deficiencies that remain commonplace in the majority world: while such deficiencies have mostly disappeared in the minority world, the micronutrient content of food is under threat and remains a major issue for the majority world. For instance, iron deficiency damages brain development in newborns, because the “golden window” of brain development in the third trimester requires iron; the effects are intergenerational. Vitamin A deficiencies cause blindness and contribute to millions of deaths annually. The use of innovative crops bred to supply specific nutrients helps to address such a crisis, but only in a context where the long-term implications of such modifications to the environment are assessed, their safety approved, and their interactions with the wider ecosystem and nutrition landscape (for humans and non-humans alike) monitored and regularly checked.

Opportunity Four: Safeguarding public health

This section outlines the potential risks and rewards of using emerging technologies to protect and promote public health.

Neurotechnology

Developments in neurotechnology are rapidly changing the frontiers of medical treatment. For instance, the use of AI to analyse brain-scan databases can help to correlate neurological age with potential health risks. Already, AI can make highly accurate predictions of the recovery trajectory days after a stroke. Transcranial temporal interference stimulation has shown potential to enhance learning and memory as well as to treat neurological and psychiatric disorders. Technologies like electroencephalography headbands are being commercialised to improve sleep and brain health, which can improve access for marginalised and low-income groups. These emerging technologies are facilitating a more preventive approach to medicine, with easier and quicker monitoring to treat causes before symptoms appear, and with less invasive interventions. Technologies that rely on invasive procedures risk complications and are often costly, and therefore less accessible to many patients. The emergence of non-invasive technologies may reduce the medical risks and financial costs of medical intervention. To take advantage of this opportunity, medical systems need to shift focus to prevention in a systematic and sustainable way, which constitutes a significant effort at a time when many national medical systems are struggling to keep up with demand from patients needing urgent treatment.

The development of neurotechnologies often relies on various narratives in healthcare, which have important implications for how the technologies are conceptualised and may be implemented. In particular, the development of neurotechnology is part of a neuropolitical climate that uses neuroscience and neurotechnology as the dominant lenses through which the brain is conceptualised and controlled. The development of neurotechnologies and the greater capabilities of psychiatry risk “mission creep”, in the sense of unduly expanding the scope of psychiatric practice over a wider range of human behaviours to the detriment of more qualitative, situational approaches such as those offered in psychotherapy and non-pharmacological forms of treatment. Conversely, the appearance of precision medicines able to target specific brain functions risks a “molecularisation” of mental processes and illnesses, encouraging a reductive research paradigm that frames mental and emotional conditions in the primarily physical language of molecular biology, which in turn may overlook other factors at play in psychiatric issues, such as the wider social and economic forces that shape human behaviour. This raises the risk that the 20th-century “psychological complex” is being reproduced as a 21st-century “neurobiological complex”. To counter this trend, transdisciplinary collaboration that is highly sensitive to patient experiences and perspectives may offer the best opportunity for technologists, patients and the various relevant medical practitioners to explicitly discuss which narratives neurotechnological developments and deployments can and should prioritise. Science diplomacy can help policy frameworks manage the difficult trade-offs between scientific concerns about data-driven medical research and technological development; ethical concerns regarding the right to neurodata privacy and patient autonomy; and the risk of dual use and the potential militarisation of the mind.

Pandemic monitoring and prevention

Science is set to play a crucial role in future pandemic responses, as demonstrated nowhere more clearly than during the COVID-19 pandemic. Many people have experienced the COVID-19 pandemic through models and numbers generated by epidemiologists, computer scientists, modellers and mathematicians, building on recent developments in data science, surveillance technologies and AI. The potential for digital technologies to revolutionise public health has thus come into public focus, with many governments investing in digital health surveillance and AI solutions despite dramatic differences in individual access to digital services and in measures for data protection. Data-sharing has increased in this scenario, with open-data infrastructures showing their worth by facilitating the discovery and study of new variants as well as the development of effective vaccines. Data-sharing has also taken a variety of forms, ranging from full and unlimited access to controlled access under pre-specified conditions, and in some cases involving no access to data at all (as in “data visitation”, where a given algorithm is run on available data without the researchers involved ever seeing the actual data, so as to preserve privacy and the confidentiality of sensitive information).
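The data-visitation pattern can be sketched in a few lines: the researcher submits code rather than requesting data, a data steward runs that code against the records it holds, and only aggregate outputs are released. Everything below, including the class name, record fields and disclosure threshold, is a hypothetical illustration of the general idea, not a description of any specific platform.

```python
# Minimal sketch of "data visitation": the analysis visits the data,
# the researcher never does. All names and thresholds are illustrative.

from statistics import mean

class DataSteward:
    """Holds sensitive records and runs vetted analyses on them."""

    def __init__(self, records, min_group_size=10):
        self._records = records              # never exposed directly
        self._min_group_size = min_group_size

    def visit(self, analysis):
        """Run an approved analysis and release only its aggregate output."""
        # Crude disclosure control: refuse to analyse overly small datasets,
        # where even aggregate outputs could identify individuals.
        if len(self._records) < self._min_group_size:
            raise PermissionError("dataset too small to release results safely")
        return analysis(self._records)

# The researcher submits code, not a data request.
def mean_age_of_cases(records):
    return mean(r["age"] for r in records if r["test_positive"])

steward = DataSteward([
    {"age": 34, "test_positive": True},
    {"age": 51, "test_positive": False},
    # imagine many more records held only by the steward
] * 20)

print(steward.visit(mean_age_of_cases))
```

Real systems add code vetting, audit logs and formal disclosure controls; the point of the sketch is simply that the analysis travels to the data rather than the reverse.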

At the same time, the COVID-19 pandemic has underscored the limitations of the global health apparatus and forced a generalised rethink of institutional arrangements and modes of action. There is an emerging consensus that the threats of global heating, zoonoses and racialised inequalities will need to be met by models of cooperation, equitable partnership and accountability that do not sustain exploitative logics of economic growth. There is increased advocacy for a just model of health that recognises the shared suffering of a mixture of human and non-human actors, while acknowledging the claims for a healthy future made by generations of a plurality of lifeforms. The concept of planetary justice is gaining traction across multiple domains and could prove a vital framework for a new, reimagined global health agenda. Such an agenda might be premised on solidarities that reach across national, class, spatial and species divisions, acknowledge historical debts and affirm mutual interdependencies. Future health governance will need to integrate pandemic preparedness, racial justice, inequality and more-than-human life into a new architecture of global health.

Biorobotics

Biorobotics provides an interesting case of fusion between computing and biological research, which promises to result in hybrid organic-machine forms of life with wide-ranging utility and potential (for example, nanorobots able to clear arteries and biohybrid cell-machine structures capable of self-regulation). The emphasis on building hybrid mammals capable of functioning outside laboratory environments and, eventually, of reproducing may have wide-ranging effects on ecosystems around the world, which need to be carefully discussed and monitored beyond the attention given to making such advances possible. These are also technologies of possible military interest (for example, insects re-engineered for surveillance). This makes the governance of such developments even more urgent, given the tendency to keep related research and development secret in the name of security.

Shifting methods for evidence-based medicine

The emergence of evidence-based medicine in the 1990s introduced a hierarchical understanding of biomedical evidence, within which different types of data are ranked as more or less reliable depending on the methods used to generate them. Observational data (including case reports and expert opinion) sit at the bottom of the ranking, while the outcomes of randomised controlled trials and related systematic reviews are hailed as the “gold standard” for high-quality, robust evidence. Newly expansive and increasingly inclusive data-intensive methods and tools, particularly in projects centred on the integration of data from multiple sources and concerning multiple phenomena, are affecting researchers’ conceptualisation of the environment in relation to health and disease. Data science is having a transformative effect on biomedical research, fostering significant changes in the evaluation of evidence, experimental methods and modelling practices. This has disrupted existing data silos and related rankings, most obviously by expanding the boundaries of the health-data ecosystem to include new sources such as social media, citizen science, digitalised administrative and social services, and self-measuring devices, but also through novel forms of data governance and AI-led analytics capable of modelling data in real time and across scales. Implementing an AI-driven approach to biomedical discovery, for instance to identify biochemical compounds that lessen unpleasant symptoms or to engineer gene products that treat hereditary disease, requires restructuring research design, assessment and safety standards to ensure the new insights are appropriately supported and evaluated.

Investigating health-environment interactions at scale

Notions such as “global health”, “one health” and “planetary health” have dominated epidemiological discourse in recent years, each advocating for a specific framing of what counts as environment and how it relates to human health. Planetary health has encouraged a more explicit focus on the physical environment that populations interact with, including climate and local ecosystems; “one health” has emphasised multispecies environments to understand co-dependencies between human and non-human populations; and global health has highlighted different populations’ experience across contexts. These expansive conceptualisations of health, which share similar political and economic backgrounds and the backing of national and transnational institutions, suggest a broad understanding of the scope and scale of environmental risk to humans. At the same time, the emergence of new measurement capabilities such as molecular markers has prompted a renewed and growing emphasis on the effects of environmental exposure at different scales on individual physiology and behaviour. Despite continuing challenges in the required multidisciplinary dialogue, the use and integration of new environmental data for epidemiology is playing a decisive role in fostering the integration of insights from climate and environmental research beyond existing reductionist leanings. Considerable threats to this vision remain: the weaponisation of biomedical innovation for military purposes, as when vaccination studies are used to develop harmful pathogens; data loss on a vast scale, as when important data sources are lost, mishandled or corrupted through inappropriate curation and lack of stewardship; and existing inequities within and across countries around who has the skills, resources and opportunity to develop and utilise novel medical technologies (and for which parts of the population).

Opportunity Five: Ocean science for health

This section outlines how emerging technologies may improve ocean health after various extractive practices have caused significant harm to coral health, fish populations and biodiversity. Ocean health is critical for human and planetary health. However, coral decline, deep-sea exploitation and the dramatic loss of biodiversity in our oceans have put the oceans at significant risk of reaching critical tipping points. A change in the geopolitical priorities of ocean use could facilitate technological developments to restore and protect ocean health rather than continue to enable extractive uses and exploitative practices from which the oceans (and the many societies depending on the ocean for their livelihoods) may not recover.

Coral decline

Coral reefs are needed to sustain various marine ecosystems and biodiversity. Corals serve as nurseries and breeding grounds for a large share of marine species and ocean life. This supports surface life that depends on the ocean for food, and millions of people globally who rely on reefs as a source of food and of income through tourism. Corals with higher thermal tolerance promise to help protect and preserve various species, functions and ecosystems, which highlights their significance for conservation efforts. For instance, many researchers consider the temperature-resilient corals of the Red Sea highly significant for improving coral health elsewhere. The Red Sea houses approximately 5 per cent of the world’s coral reefs, with various regions listed as sites of outstanding universal value; it might be seen as a “reef of hope”.

The increasing frequency and severity of stressors reduce the ability of coral reefs to bounce back. For instance, sea-surface temperatures are reaching new records. The acidification of the ocean and other stressors may interact with temperature changes, damaging coral populations even more and pushing reefs into different and less valuable configurations. Climate change is the most significant international threat to coral reefs, as rising temperatures push corals past their survival thresholds. Local stressors, often pollution from nearby human populations pursuing economic development, also threaten their survival. The Red Sea in particular is a relatively small body of water, meaning local pollution has regional consequences. While local pollution and environmental stressors harm corals, they can be prevented. Land reform may help to avoid development in fragile areas, especially in the Caribbean, to give reefs time to recover. However, sustainable practices need local community involvement. This includes recruiting fishers, who know the reefs best, as reef stewards to guard and protect coral habitats, and supporting local fishing communities in transitioning to sustainable practices and eco-tourism to reduce extractive activities.

There are many technology-based strategies for coral conservation. In practice, how different publics interact with and value reefs, and especially the differences between scientific researchers and local communities, can directly influence coral reef health and significantly shape the feasibility of conservation efforts. This shows that social science and public collaboration are inseparable from effective technological and environmental interventions. One pressing priority is the need to buy time. Efforts to mitigate immediate stressors do not solve the wider problems, but they do create temporary relief between stress events, allowing reefs to recover and regenerate, and allowing scientists, diplomats, policy-makers, politicians, activists and others to work on longer-term goals. For instance, geoengineering alters cloud cover to reduce light stress on corals. Assisted evolution, which cross-breeds corals with higher tolerance and grows and plants resilient coral species, accelerates evolutionary adaptations. “Reef banking” preserves small coral “pockets” as a source for future repopulation. Science diplomacy might help technological developments such as large-scale assisted gene flow, assisted evolution, synthetic biology and habitat engineering gain legal and political licence more quickly, so that technological interventions can protect reefs earlier. However, technological interventions risk unintended consequences. For instance, attempts to geoengineer ocean environments to sequester carbon and for other purposes threaten climate-microbe interactions in the ocean. Moreover, biological modifications are not a silver bullet. In practice, changes to fishing equipment and practices, and a reframing of goals, such as maintaining reef functions rather than specific species, may enable more effective interventions to protect coral reef health.

A major issue is that many conservation efforts are fragmented and compete against each other for funding. This duplication of conservation and fundraising efforts hinders efficacy and fractures trust. The risk of funding unethical conservation practices, and of unethical sources of funding, can further harm trust. Funding often favours politically attractive rather than environmentally pressing conservation efforts, creating a risk of “greenwashing” that exploits ocean conservation for public image rather than real impact.

Deep-sea exploitation

Current commercial fishing practices often result in overfishing. A direct environmental consequence of overfishing is that it pushes fish populations below replacement levels. There are also significant indirect environmental consequences: it destroys other marine populations and their habitats, and it often pollutes the sea with various by-products, such as old fishing nets and discarded by-catch. This threatens ocean health as it changes and degrades the trophic structures of oceans with disrupted food chains. A commercial consequence of overfishing is that it becomes an unsustainable practice as it directly destroys the fish populations upon which the industry relies and indirectly destroys their habitats.

Deep-sea mining relies on emerging technologies to extract high-demand metals from the ocean. This risks causing significant harm to ocean health. For instance, deep-sea mining threatens the habitats of the newly discovered sea pangolin, a seemingly rare deep-sea species with a distinctive biology. More generally, deep-sea exploitation risks “dark extinction”, the loss of (potentially scientifically and socially significant) species before they are discovered.

Science-based interventions to improve ocean health face various challenges. From a commercial standpoint, fishing companies are often reluctant to comply with significant restrictions or adaptations to current fishing practices when the feasible alternatives are much less commercially attractive. From a scientific standpoint, the monitoring and enforcement costs of ocean policy frameworks are often very high. For instance, it is difficult to monitor the damage to marine populations and the ocean floor because ocean data is often highly incomplete. In practice, ocean science often has access to less funding than land science, but it relies on costly technologies. For example, data on deep-water fishing often relies on submersibles and remotely operated vehicles, and the remote and extreme conditions can significantly hinder data collection. As a result, significant data gaps about the state of fish populations and their habitats at particular times and across time can hinder the effective monitoring and enforcement of ocean policy.

Opportunity Six: Aligning the military use of science with international humanitarian law

This section outlines some of the significant challenges of militarising emerging technologies for national security.

Omni-use science

The “omni-use” problem arises when civilian technologies are repurposed for unintended military uses. Emerging means of warfare are ever more integrated into broader technological systems that adapt and evolve beyond the military sphere, such as AI. This makes the lifecycle of these technologies complex to track and regulate, not least because they tend to move in and out of military use: for instance, starting with military applications (a common occurrence for computing technologies, given the large investment in military research and development around the globe), finding new applications in civilian use (and becoming cheaper and more user-friendly in the process), and then being reappropriated by military actors (including those with lesser means). The panoply of related uses makes this an omni-use, rather than simply a dual-use, technology.

There are clear and immediate risks arising from civilian technologies, especially concerning national security. For instance, technologies might be repurposed by state, criminal, terrorist or other organisations to intentionally or unintentionally harm civilians. In this sense, omni-use presents difficult trade-offs in practice. One strategy is to prioritise national security and rely on legal and policy frameworks to restrict unintended military uses of new civilian technologies, even at the cost of slowing technological development. Nevertheless, legal and policy frameworks can only do so much. On the one hand, they might be too slow to adapt or to enforce effectively. For instance, the highly integrated nature of some omni-use technologies and the lack of independent oversight can significantly limit how well the international community can monitor and regulate dual-use and omni-use technologies. Most recently, OpenAI was awarded a $200m contract with the US Department of Defense to develop prototype frontier AI capabilities to address critical national security challenges in both warfighting and enterprise domains. On the other hand, nation-states often do not wish to incur competitive disadvantages, whether through less advantageous civilian technologies or less advantageous military technologies. So science diplomacy provides a useful channel to manage omni-use trade-offs and may help to facilitate the safe and speedy development of civilian technologies beyond what legal and policy frameworks can realistically achieve.

Omni-use is a well-known issue for the life sciences, with research from molecular biology, virology, pathology and biomedicine proving essential to combating disease and pandemics, but also amenable to militarised uses including biowarfare, biocrime and bioterrorism. The monitoring and enforcement costs remain high. This situation can teach useful lessons for the engineering and computing sector in the development of general-purpose AI that clearly has potential for unwelcome forms of surveillance. A clear example is the extensive debate around facial-recognition technologies and whether they should be subject to a moratorium, while at the same time they are becoming ever more entrenched as a key form of self-identification and cybersecurity. The temptation to further develop the potential of such technology without considering the risks of abuse and harm to citizens needs to be countered by appropriate forms of engagement and regulatory measures.

Boomerang effects

The boomerang effect occurs when military technologies are repurposed for unintended domestic uses. In other words, what is made technologically feasible abroad during war today becomes a technological reality back home tomorrow. Since technologies used abroad during war operate under different legal and policy frameworks, the boomerang effect poses significant legal and political challenges when such technologies are repurposed for domestic use. This highlights that the development of military technologies has potentially significant long-term domestic consequences.

The boomerang effect can take various causal pathways. Most straightforwardly, the development of new military technologies creates new physical capital. Once governments pay the high upfront costs of technological innovation to control foreign populations more effectively, it becomes much cheaper for public and private organisations to reproduce such technologies to exercise similar controls over domestic populations. For example, when governments invest in defence-related research, development and procurement, they unintentionally but foreseeably reduce the cost of surveillance technologies, equipment and weapons for domestic use. Most recently, military operations increasingly rely on AI to inform human judgement during war, with the ability to process data at speed to help plan, target and execute operations faster and more effectively. In turn, this raises the risk of enhanced domestic surveillance, more militarised styles of policing and militarised AI at home.

Another causal pathway for the boomerang effect is that the development of new military technologies creates new human capital. It creates new types of specialised personnel with distinctive skills and dispositions. For instance, specialised personnel become confident in the ability of military planning and technology to manage complex social dynamics effectively. Moreover, they are willing to use a multiplicity of means of control on foreign populations to pursue military objectives which domestic legal and policy frameworks would prohibit in domestic settings.

A third causal pathway is that new military technologies create new organisational dynamics. The distinctive human capital of specialised personnel often gives them a competitive advantage when they enter public or private organisations in the domestic defence sector. For instance, retired generals and admirals typically work for private defence contractors and consultants. Their reputations for using military technologies to control foreign populations effectively allow them to reproduce similar controls over domestic populations with similar technologies. As a result, the new physical and human capital that new military technologies produce reshapes how public and private organisations interact with domestic populations.

The boomerang effect highlights the potential long-term domestic consequences of developing military technologies. This foregrounds the need to balance the expected military benefits of new military technologies against the potential domestic risks that the new physical capital, human capital and organisational dynamics may pose if and when such technologies are repurposed for domestic use.

Conclusions

The aspiration for global access to science raises the question of how the global and local governance of scientific research and technological developments may become sensitive to the variety of circumstances within which such research and technology will be deployed. With deeply uncertain futures on the horizon, an anticipatory type of policy framework can proactively cultivate a diverse research ecosystem that is better prepared for whatever social changes future science might bring. A diverse research ecosystem already contains a web of judicious relationships among a range of public and private organisations (including government agencies, public universities, think tanks, civil organisations, private investors, private corporations and international organisations) with easy access to robust, resilient and responsible knowledge and expertise. In particular, it can cultivate transdisciplinary and international knowledge production to deliver scientifically reliable and socially responsible research. Hence a diverse research ecosystem is better prepared for uncertainty: it already contains a range of tried-and-tested research practices that can adapt quickly and competently to unexpected scientific and technological changes. As a result, policy-makers, diplomats, politicians and others can gain easier and earlier access to the knowledge, expertise and relationships critical for the quick and competent global and local governance of scientific and technological changes. In return, global and local governance can more easily cultivate scientific and technological developments that meet the evolving social needs of different publics as those publics see them. Without the right type of research ecosystem in place, it becomes harder for policy-makers, diplomats, politicians and others to access the knowledge, expertise and relationships they need to design and implement effective policy interventions. They lack the knowledge and expertise, and the relationships to gain them, to discover how emerging technological developments interact with complex social settings and how policy may anticipate and adapt accordingly.

An anticipatory type of science diplomacy can help to manage the friction between the need for stable policy frameworks and the potential for rapid technological change. Stable policy frameworks are critical to provide the certainty that public and private organisations need to operate effectively. However, the freedom to experiment and disrupt is also critical for technological innovation. In response, a robust research ecosystem, with robust research infrastructures and robust human skills, is critical to accommodate data needs that may differ across research communities and change within them over time. In practice, the infrastructures and skills that global and local governance climates make available both shape and limit the innovations that public and private organisations can pursue and prioritise. So greater investment in access to robust infrastructures and skills is critical for inclusive technological development that closes the digital divide and meets different social needs across different social contexts. As a result, such access gives public and private organisations more certainty to innovate effectively, with the freedom to experiment and disrupt without being significantly limited by the infrastructures and skills available to them.

Widespread public distrust in science has made the trustworthiness of science a pressing policy issue. Both effective policy design and effective policy implementation largely rely on trust among a plurality of stakeholders. So science diplomacy can aspire to cultivate new narratives about science-based politics and technological innovations that might help to build social trust and resist the sensationalised narratives, dominant in public life, that frame science and technology as socially harmful forces. Similarly, private organisations can build trust with various stakeholders beyond shareholders, including customers, employees and local communities, around a shared vision of the public interest, enabling them to operate effectively across industries and regions. The major value of science lies not in a naively optimistic certainty in research outputs that encourages public and private organisations to complacently “follow the science”, but in the process of discovery and the scepticism it cultivates. Science provides a useful model of how to govern disagreement, with its deference to evidence and its openness to criticism and correction from diverse perspectives. The aim is not primarily substantive agreement but productive discourse. So whatever specific scientific outputs the public might distrust, science diplomacy can aspire to cultivate trust in the ability of the scientific practices in public and private organisations to govern disagreement well, through a general culture of open inquiry that seeks evidence and remains open to criticism and correction from opposing viewpoints.

While science is clearly useful for diplomacy, the reverse is also true. From an economic standpoint, diplomacy can help to make a range of more effective business models feasible. For instance, diplomacy can help to open up new consumer markets for business models supporting disadvantaged and marginalised communities across the world. It can also foster new supply chains and market accessibility for a mixture of public and private organisations. From a political standpoint, diplomacy can help to build trust among nation-states. This type of diplomatic work is critical for an open research community to reap the rewards of international collaboration. In return, the rewards of a more open international research community may help diplomacy to support and solidify trust among nation-states.

The international research community can promote inclusive scientific research and technological developments that many scientific and social communities across the world find useful, whatever their specific situations might be. In the context of ever more fragmented geopolitics, rapid technological developments continue to present the international research community with great social and economic opportunities to seize, and with new political trade-offs to manage regarding national security, human rights and other socio-political concerns. At a minimum, international agreements can strive to prevent the worst-case scenarios and deter the many misuses of science that would leave the world worse off. More aspirationally, science diplomacy can continue to cultivate a climate of trust and confidence among various nation-states, private corporations and international organisations critical to the production of scientific knowledge, facilitating the collaborative efforts that promise to make the future advantages of science accessible to all.

Acknowledgments

We thank Sophie Gilbert and Martin Müller at GESDA for helpful discussions and comments, GESDA for the collaboration and fruitful exchanges, and the European Research Council under the European Union’s Horizon 2020 research and innovation programme for supporting the research that grounded our insights as reported in this text [Grant Agreement No. 101001145]. For insightful feedback on previous drafts, we thank: our research community in the Frontiers in Open Research seminar series, especially Elis Jones, Leander Müller and Michael Stoeltzner; our colleagues on the PHIL_OS research project, especially Rachel Ankeny, Paola Castaño, Joyce Koranteng-Acquah, Desantila Hysa, Nathanael Sheehan and Fotis Tsiroukis; and the Ethical Data Initiative, especially Kim Hajek and Paul Trauttsmandorf. Any errors remain our responsibility. The content reflects only the authors’ views, and GESDA and the ERC are not responsible for our interpretation or for any use that may be made of the information presented here.