Responsible Anticipation

Invited Contribution

We live in times of great acceleration in science and technology. This acceleration promises breakthroughs with transformative impacts on human life and the planet. Anticipating these breakthroughs is critical to ensuring that we reap the greatest benefit from them. Is there anything the scientific community can contribute to this process beyond the creativity and intensity needed to achieve these breakthroughs?

In fact, more is required by the human right to benefit from science — a right that receives relatively little attention but which is nonetheless codified by Article 15 of the International Covenant on Economic, Social and Cultural Rights. Under this article, the public has the right to enjoy the fruits of science, while signatory states must foster conditions under which scientific research is conducted freely and its benefits diffused widely.

The quid pro quo for this is that the scientific community should practice responsible anticipation. It should not simply take progress for granted. Rather, it should anticipate its own trajectory, reflect on its impacts, and, if needed, steer scientific progress in the direction most beneficial for society. This is resource-intensive, intellectually and emotionally demanding, and driven by constantly changing needs. But it is indispensable to ensure that scientific progress serves the welfare of humankind and contributes to protecting life on the planet.

To anticipate responsibly, scientists must engage, individually and collectively, in three activities: contemplation, control, and communication.

Contemplation is a self-reflective exercise of the scientific community when called on to anticipate the impacts of research. Contemplation connects present actions with future impacts. The future is the result of what happens in the present, and so the goal is to evaluate today’s actions in light of what is anticipated, to ensure a desirable future in which science benefits humanity to the full extent possible. To be comprehensive and adequate, contemplation requires integrating two complementary approaches: prospective and retrospective anticipation.

Prospective anticipation relies on foresight tools to create a pathway to a desirable future. This form of anticipation deploys linear and systemic thinking to predict and/or guide what science and technology will make possible in the future. Typical tools include trend reports, horizon scans, scenario planning, and pathway development.

Retrospective anticipation deploys the power of creativity to generate utopian and dystopian futures — to make the “unthinkable” thinkable. These “unthinkable” futures serve as the basis for reflecting on present practices, critically assessing foresight and its assumptions, and generating alternative worldviews of how science and technology can shape human civilisation’s future. This perspective creates discontinuity. By creating a gap between the present and future, retrospective anticipation aims to reshape mindsets and change how scientific (and social) progress is perceived and addressed in the present.

Control: When contemplation indicates that the present trajectory of scientific progress must be corrected to achieve desirable future outcomes, the scientific community must act to steer and manage that trajectory. The most effective way to achieve this result is self-regulation, which refers to self-imposed constraints on scientific freedom. Self-regulation is led by scientists and directed toward scientists, although it may be influenced by external factors — for instance, by a pressing social need. It is independent of political institutions and the legal system, although it may be reinforced by outside forces in the form of laws and regulations.

Examples of self-regulatory controls are guidelines identifying standards and best practices; statements of principles of conduct; norms of conduct within collective arrangements such as research consortia; and outright moratoria on contentious areas of research.

The biomedical field offers several successful examples of self-regulation. The Bermuda Principles, which mandate that all DNA sequence data be released in publicly accessible databases within twenty-four hours of generation, marked a departure from the tradition of making experimental data available only after publication and have become the norm in genomics. The 1975 Asilomar Conference on Recombinant DNA, which convened 153 molecular biologists, sixteen journalists, and four lawyers to discuss the future of research on and applications of recombinant DNA technology, produced a risk-based typology of research activities that remains the blueprint for these kinds of experiments, and it led to a moratorium on riskier research pending the construction of facilities where such work could be conducted safely. The International Summit on Human Gene Editing, which convened scientists to define a responsible pathway to experimenting with germline gene editing in humans, concluded its third edition with an organisers’ statement that these experiments remain premature because appropriate governance is still insufficient.

Communication: Finally, responsible anticipation demands that scientists explain to the public and policymakers why and how they have decided to manage the trajectory of scientific progress. Communication must be broad in scope, including information on the anticipatory processes and approaches used to generate ideas and the need for self-regulation, the values underlying self-regulatory processes and outcomes, and a vision of the next steps of responsible anticipation.

The goal is to inform nonscientists of the outcome of self-reflective and self-regulatory practices, to foster trust in the scientific community and science, and to empower citizens and policymakers with knowledge that can serve as the basis for formulating policies that balance risks and opportunities of scientific progress, while advancing human rights and promoting intergenerational justice.

Responsible anticipation is successfully practiced in the biomedical field because it is an area of science that very quickly comes into contact with societal values, opening up scenarios that could challenge the collective understanding of what it means to be “human.” Scientists in the biomedical field also operate in a highly regulated environment, with a long tradition of professional ethics dating back to the Hippocratic Oath at the origins of medicine and experimentation on the human body.

The achievements of responsible anticipation have so far been less significant in other, less historically conditioned fields of science. In AI and geoengineering, for example, governance has developed in regulatory environments that are much less structured than in biomedicine. Scientists in these fields are thus freer to choose different paths, and responsible anticipation is therefore more challenging to coordinate. This is evidenced by the proliferation of self-regulatory pronouncements and statements: according to AlgorithmWatch, 167 self-regulatory statements have been produced in AI governance, while Herzog and Parson identified 18 calls for moratoria, including research moratoria, on climate engineering. Paradoxically, the greatest need for fostering a culture and practice of responsible anticipation exists in these less-regulated fields.

As responsible anticipation becomes more prevalent, we must ensure that scientists commit time and resources, educate themselves about anticipation, and integrate anticipatory practices into their ways of doing science, in both the public and private sectors. Society and governments must support it, commit to taking it seriously, and appreciate the value of the scientific community’s efforts to define paths to scientific progress that prioritise the benefits for humanity.

More thinking must also go into enriching the toolbox for monitoring and fostering compliance with responsible anticipation. It would be illusory and unrealistic to expect responsible anticipation to prevent all “bad behaviour” among scientists; that is not the goal of anticipatory practices. Their function is to bring scientists together, engage them in a conversation about future impacts, set standards of conduct, and generate insights, grounded in scientists’ expertise, that society and policymakers can use to carry on a broader conversation about scientific progress.

Ultimately, to be effective and respectful of human rights, the regulation of innovation cannot rest in the hands of a single social actor; it can only emerge from collective action. Focusing on responsible anticipation means organising an essential piece of that process. If scientists do not take it seriously, and society does not believe in its value, a critical opportunity to steer innovation towards a path beneficial for all will be missed.

Further reading

AlgorithmWatch, ‘AI Ethics Guidelines Global Inventory’, https://inventory.algorithmwatch.org/database.

Andrea Boggio, ‘Anticipation in the biosciences and the human right to science’ (2023), https://ssrn.com/abstract=4522373.

Megan Herzog and Edward (Ted) A. Parson, ‘Moratoria for Global Governance and Contested Technology: The Case of Climate Engineering,’ UCLA School of Law, Public Law Research Paper No. 16-17 (2016), http://dx.doi.org/10.2139/ssrn.2763378.

Gary Marchant and Carlos Ignacio Gutierrez, ‘Soft Law 2.0: An Agile and Effective Governance Approach for Artificial Intelligence,’ Minnesota Journal of Law, Science & Technology 24: 375 (2023), https://ssrn.com/abstract=4473812.

Andy Parker, ‘Governing solar geoengineering research as it leaves the laboratory,’ Philosophical Transactions of the Royal Society A: Mathematical, Physical and Engineering Sciences 372: 20140173 (2014), https://doi.org/10.1098/rsta.2014.0173.

Sebastian M. Pfotenhauer et al., ‘Mobilizing the private sector for responsible innovation in neurotechnology,’ Nature Biotechnology 39: 661 (2021), https://doi.org/10.1038/s41587-021-00947-y.

Anna Su, ‘The Promise and Perils of International Human Rights Law for AI Governance,’ Law, Technology and Humans 4: 166 (2022), https://doi.org/10.5204/lthj.2332.