Use the future to build the present
Debate 1: The Philosophical Compass – Three Questions for Tomorrow
[Figure: GESDA anticipation radar of highest-anticipation-potential topics, grouped into four platforms: 1. Quantum Revolution & Advanced AI (1.1 Advanced Artificial Intelligence, 1.2 Quantum Technologies, 1.3 Brain-inspired Computing, 1.4 Biological Computing); 2. Human Augmentation (2.1 Cognitive Enhancement, 2.2 Human Applications of Genetic Engineering, 2.3 Radical Health Extension, 2.4 Consciousness Augmentation); 3. Eco-Regeneration & Geo-Engineering (3.1 Decarbonisation, 3.2 World Simulation, 3.3 Future Food Systems, 3.4 Space Resources, 3.5 Ocean Stewardship); 4. Science & Diplomacy (4.1 Complex Systems for Social Enhancement, 4.2 Science-based Diplomacy, 4.3 Innovations in Education, 4.4 Sustainable Economics, 4.5 Collaborative Science Diplomacy).]


The vision of GESDA — to use the future to build the present — and the anticipation of scientific breakthroughs at five, ten and 25 years raise a series of essential questions about how to improve the human experience of life on a healthy planet. Answering those questions will allow us to set our compass in the right direction as we move from anticipation to translation into concrete initiatives.

This demands more than a single debate; it requires continuous interaction with key thinkers from philosophy, history, anthropology, sociology and related disciplines, as well as engagement with the broader public.

Use the future to build the present

During the past year, GESDA has consulted leading global scientists about which scientific advances in the coming 5, 10 and 25 years will have a strong impact on people, society and the planet. This anticipatory mapping has involved more than 400 scientists across twenty topics, including the social and human sciences, as the trend section of the report will show.

While the anticipatory mapping provides a vision of what scientists think about future advances in their disciplines, the broader meaning of those developments, and their implications for the fundamental questions facing humanity that are part of the GESDA Roadmap 2020–2022, have not yet been elaborated.

What makes us human, how we live together, and how we relate to the planet: these questions are at the core of GESDA.

Using the future to build the present in a way that benefits humanity calls on our ability to respond to these three questions. As GESDA moves from science anticipation to concrete solutions for the benefit of humankind, addressing the fundamental and existential questions behind “using the future to build the present” becomes critical. Akin to a “philosophical compass”, these notions will equip GESDA with the tools to guide its actions towards positive outcomes. GESDA initiated a first dialogue on the three questions with a series of key thinkers in July 2021. Below we provide some impressions of the discussions as quotes, which will be enriched as we progress.

1. Who are we, as humans?

New scientific discoveries are radically changing how we perceive ourselves as human beings. Advanced synthetic biology and gene-editing techniques have the potential to modify the biological fabric of our bodies. Advances in cognitive neuroscience, brain-machine interfaces and neural technologies may provide access to our inner thoughts in the near future and allow others to steer our behaviours. The power of quantum technologies and advanced artificial intelligence might provide new understandings of consciousness and the origins of life.

What does it mean to be human in the age of robots, gene editing and augmented reality?


Mark Hunyadi

Professor for Moral, Social and Political Philosophy, Université Catholique de Louvain

“Homo technicus”. There is one thing that is extraordinarily striking under the effect of technological development, namely that until very recently, philosophy was obsessed, one can say, with the question of the distinction between man and animal. Each philosopher has his own contribution; for Rousseau, for example, it is freedom that distinguishes; for Heidegger, the fact of having what he calls a world. And under the effect of developments in artificial intelligence, and robotics too, the question really arises of the distinction between man and machine, a kind of displacement of the question. In general, I find our scientists much too fascinated by the power of machines, and insufficiently surprised by human intelligence, in particular where it is most immediately visible, that is, in children. I find that we are reasoning the other way around, that is to say that we are fascinated by this computational capacity of machines. Everyone can see that the machine does things that the human mind cannot do. Suddenly, that gives a kind of extraordinary privilege to this function, let's say of calculation or computation, but we are not surprised by the questions of a child, by the way a child apprehends the world. To all these new algorithms we give millions of images to distinguish a cat from a dog. But what fascinates me is that once you show a cat to a child, not only does it recognise other cats and distinguish them from dogs, but it even recognises the symbolic forms of a cat, for example Geluck's cat. I think that if we retuned our astonishment a bit we would get another image, maybe humbler on the side of artificial intelligence and with more admiration for human intelligence. [Everyone] is fascinated by computing power. And when we tell [the scientists], but look at the child, he can not only tell but summarise a story — [they reply] yes, but artificial intelligence will be able to do it. I don't think so at all.
The summary of a story is not a question of calculating words, or of identifying the number of occurrences of words. It is a question of being able to give a meaning to a narrative sequence of events, and that is something totally different. I think that we are in the wrong paradigm with the paradigm of calculation. Questions of meaning are so important for childhood. These questions of meaning are extraordinarily important to humans in general, and they are simply not reducible to calculation. A crane — this is a somewhat brutal parallel, of course — lifts weights that a human body can never lift. This does not mean that we find it intelligent. In the same way, basically, this extraordinary computing power that machines have does not make them intelligent for all that. From a more ethical point of view, what I fear for the future of humanity, since it is a bit your question and it is mine too, is that by dint of constantly valuing calculation, what will count [in the end] is calculation, neglecting questions of meaning. My fear is that little by little, we will come to eliminate the human being himself, his dispositions and his aptitudes: precisely the search for meaning, imagination, spontaneous creativity.

If what matters is what you can count, then everything you cannot count will gradually be devalued. When you make machines, when you make a world of machines and algorithms, you don't just make machines, you make the humanity that goes with them. We robotise humanity, and it becomes basically a kind of partner of machines, as you have to go through machines all the time. And that is something completely new in the history of technology, that is to say in the history of mankind. [..] Human beings have always been technical animals, but there is something quite new happening right now before our eyes. The great thinkers of technology, like Marx or Hannah Arendt, described tools as an intermediary between me and nature. Something that prolongs my body. A little later, with people like Jacques Ellul, for example, we think of technique as a system. For Leroi-Gourhan, the paleontologist, it was an environment. […] But today, we are in something else again. Now it is technology, via digital technology, that allows us, in general, to have access to the world. The digital becomes the obligatory mediation. Not just an intermediary who helps us. It becomes compulsory mediation. Tendentially — let's not exaggerate — it means that we are all the time obeying machines. We are in a real dependence on machines; it implies a very asymmetrical dependence, since these machines are programmed by someone. It gives the designers of these machines quite exorbitant power over how we can access the world. As I usually say: obeying machines makes us obedient machines.


Monique Canto-Sperber

Former Director of the École Normale Supérieure, République des Savoirs

The perspective of an augmented human being. Here are some issues to think about in relation to the prospect of a cognitively augmented human being: would such an augmentation only amplify already-formed capacities, or could it confer these capacities from the start, without the need to acquire them? In this second case, the consequences for the definition of what a human being is are quite different.

Let us take the example of learning, the necessary acquisition process of knowledge and skills. If knowledge and skills were immediately available, without any learning, the following issues would need to be further explored:

  • Learning is a process of self-transformation, which is necessary for the human body to adapt to its environment. If this process no longer existed, what would be the consequences?
  • Learning implies repetition and therefore an experience of duration which has an impact on the functioning of the brain (exercise of patience, learning of the temporal dimension, familiarisation with the time required to complete a project). In this way, it opens up a range of other related competences: would the disappearance of learning lead to the weakening of these competences?
  • Learning responds to a desire and an effort; in other words, it mobilises a set of motivational dispositions that allow the individual to orient himself towards the world and to act to transform it. If there were no more learning, what would be left of these dispositions?

This simple example about the learning process, formulated without any moral considerations (it is not a question of knowing whether an augmented human would be better or worse), shows what is at stake. Augmented humans could represent a considerable gain for humanity, but there is a risk that this augmentation goes hand in hand with the attenuation, if not the disappearance, of a set of cognitive or motivational capacities which play a major role in the interactions of a human being with the world and with his own kind.

The same reflections could be made about our capacity for imagination, or about emotions. Current research in cognitive psychology tends to show that this capacity plays a significant role in the processes of knowledge, discovery and innovation. But these emotions are also linked to desire and dissatisfaction, including about knowledge: would immediate access to knowledge through cognitive augmentation free the imaginative and innovative resources of human beings? Or would exempting human beings from the experience of dissatisfaction, trial and error, and progress deprive them of the cognitive resources necessary for innovation?


Wendell Wallach

Scholar, Yale University Interdisciplinary Centre for Bioethics

There are all kinds of technologies being predicted, all kinds of areas being worked on. We don't know when breakthroughs will be made in these various realms of research, or in what order. And as these breakthroughs are made, we don't know how they will interact synergistically. It is therefore very hard to anticipate exactly what the near- and longer-term challenges are going to be. This places us in a difficult position in the science policy dialogue about how to shape technological development through our actions in the present. The philosophical and institutional structures that we put in place must ensure that our policies are well thought out, that we are prepared for unanticipated events, and that we have at least the right values or the right approaches for factoring in ethical considerations. Those ethical considerations must acknowledge the benefits of the various innovations, but also keep a watchful eye on the undesired societal impacts and risks of allowing certain technologies to move forward.

On the philosophical front, there are ontological, epistemological, and of course meta-ethical and metaphysical considerations that come into play. For example, is the machine/human distinction really helpful or not? We have been using machines as an imperfect mirror to understand what it means to be human and the ways we are similar to or truly differ from the artificial entities we create — whether these are intelligent machines, whether they integrate biological and human material, or whether they are enhanced humans. Are the ontological categories we have helpful, or do we need to be thinking in new ontological categories? What should be the rights of artificial entities, whether intelligent machines or enhanced humans? What is the ontological status of ecosystems, or of the planetary environment? What kind of say do, or should, they have in the decisions we make? What is the ontological and legal status of animals, particularly those that seem to show cognitive faculties? And I don't just mean great apes, but also species whose cognitive capabilities differ from ours or evolved differently. There is a whole vast realm of cognitive capabilities that nature has created and from which we can learn. All of that needs to be factored into how we forge a pathway forward.

Then there are the epistemological questions. What counts as knowledge? Are there prevailing scientific narratives and framings that at times run roughshod over other forms of knowledge? And if so, as I believe there are, what are these other forms of knowledge and understanding, and how can they be elucidated?

We are in the midst of what Jürgen Habermas referred to as a delegitimisation crisis, in which the public loses faith in its governments and institutions to solve its problems. Self-driving cars provide an apt metaphor for the Information Age: technology is moving into the driver's seat as a primary determinant of human history. [..] I'm pro science, but I'm also someone who is perhaps best known for underscoring the dangers of various technologies whose goals, risks and tradeoffs have not been considered comprehensively. With these words, I hope to underscore the vastness of the subject matter, and the fact that even people like myself, who have been submerged in it in the most transdisciplinary of ways, have only an inkling of what's going on or what all needs to be addressed. [..] This complexity points to one very simple fact: the notion of intelligence as the property of any single entity, whether artificial or human, misses the reality that intelligence is collective and participatory.


Eric Salobir

President, OPTIC Network

The human person is defined by interactions with others: What I think is interesting is precisely that the great strength of our species has been this capacity to renew itself and to transform itself through genetic evolution, through the mixing of genes. As we can see, it is not by chance that, in the end, the prohibition of incest is not just a psychological prohibition of access to origins. It is also a prohibition which has a biological dimension. We are ourselves, and we only flourish as ourselves, with the other. I need the genes of the other to perpetuate myself. I need the difference. It also means that human beings develop in interaction with others. The work of the Canadian philosopher Charles Taylor speaks to this: the “two sources of the self”, or the two sources of our identity. This means that there is a part of my identity that comes from me. There is also a part of my identity that comes to me from the other. From the moment that technology blocks or transforms the relation with the other, it will transform me. There will be a mediation of this technology on the constitution of my identity through the other, which, suddenly, will transform me. The difficulty that I see when I speak about the emergence of a human person is that it is not only that we are enriched by discussions with others, but also that it is the confrontation with the other that moves me and allows me to transcend myself. This will not happen with a machine that is deliberately not made to transform the individual, who therefore risks remaining raw.

Also, consider the very far-off scenario of interacting with an AI that would have a form of self-consciousness or a form of intelligence (or the illusion of intelligence), to the point where the AI system becomes a part of your life on the same level as a relationship with a human person. After a while, the problem with AI is that, because it is by definition made to meet our individual needs, these interactions can become exclusive of everything else. And here too, we are in a scenario where we could gradually see a human being find fulfilment in their digital twin or digital clone. Looking at yourself in this kind of mirror: a kind of self-fascination for two, but one which, in the end, can be a rather complex relationship. In Japan, there is a term for those who, pathologically, live only through technology and no longer go out. It could become a fairly common lifestyle model, but one that, in my opinion, does not allow the complete emergence of the person, and will not result in completely structured humans.

Human agency and free will: In this context, for me, the question is what it is to be human in the age of artificial intelligence. The question I am asking myself is: is this transformation which is underway, and which we cannot yet read clearly — is it a transformation that will bring more sense to human life? Or will it lead to a form of dehumanisation? I think the key elements are those around human agency, the capacity for action and free will. It is interesting to think that a certain number of digital technologies are made to make our lives easier. You no longer choose the temperature in your car; you no longer choose the pieces of music you would like to listen to, or your playlist. We have more and more machines that handle all of this for you. As we get rid of all these little choices that poison our daily lives, do we liberate ourselves to ask the big questions? Or, on the contrary, because we no longer need to make small decisions on a daily basis, do we risk, at some point, reducing our capacity for free will? I would make an analogy. We know that people who use their GPS extensively can gradually lose their sense of orientation. After a while, when they lose access to the machine, they are not really able to orient themselves anymore. And so their perception of their position in space, of their relation to space, is changed. There is an almost phenomenological dimension to the relationship we have with the space around us, which is evolving quite simply because we are more used to being moved around than having to orient ourselves. We are guided and do not have to ask questions. We no longer go from point A to point B; we are taken from point A to point B. Is the same thing going to happen with our capacity for action and our capacity for decision? Or does this facilitation, on the contrary, free our minds for the most important topics?

2. How can we all live together?

New advances will modify not only our perception of ourselves but also our relationship to others. Research on ageing might extend people's lifespans and raise new questions about equitable access to science or the intergenerational contract of current societies. Advanced AI might lead to further automation and fundamentally change the nature of labour and political participation. Advanced biology combined with quantum computing might provide solutions to poor health or hunger. Which deployments of technology can help reduce inequality and foster inclusive development and well-being?


Monique Canto-Sperber

Former Director of the École Normale Supérieure, République des Savoirs

A second example (about the augmented human being) relates to interactions with others. The need for the other is fed by the inability to be self-sufficient. Human beings become aware of their interdependence very early on, precisely because there are things they do not know and cannot do. An increase in human capabilities that would give rise to a certain form of omnipotence in all areas would lead to ignoring the need for the other. Without this basic need, the political community loses its raison d'être. And when it loses its raison d'être, so do the modes of regulation that are laws and norms. Such a state of affairs could lead to a permanent state of war, which only an extremely authoritarian power would be able to pacify in some way from the outside, as a form of Leviathan. On the other hand, it is also possible to imagine that an augmented human being would have extremely developed emotional capacities and an increased ability to identify with others.

A last question is whether human augmentation would lead to the equalisation of all. It is a question that is seldom asked, but which is absolutely fundamental. If there were systems of amplification and augmentation of this importance, affecting the cognitive processes themselves, human capacities would tend to become equal across all individuals. We have never been in this kind of society.


Mark Hunyadi

Professor for Moral, Social and Political Philosophy, Université Catholique de Louvain

“A gloomy vision of the future of humans”. Will there be an awareness of humanity that allows us to take back control? With the grip of the digital, everything is done to prevent this awareness. This is what worries me. The digital works on a principle that I call libidinal: it works on the pleasure and the satisfactions it provides, in leisure and also in professional life. It's everywhere, it's intuitive, it's fast and efficient. This libidinal principle fragments us into individuals, into profiles. This goes against the grain of any collective or political awareness. The degree of satisfaction that digital technology brings is so high that it actually prevents any criticism. If we look at great societal changes historically, they were made when people were suffering. (I'm simplifying): people said stop, we're going to topple the regime or start a revolution. But here, now, no one is suffering. It happens through our own pleasure, by our passive adhesion. The situation is catastrophic, but not hopeless. Why? Because I see an increasing awareness, through my teaching for instance.

There's already a world of difference between my freshmen ten years ago and my freshmen now: they're a lot more aware of what Facebook is doing to them, how it works. I also think that across the world there are a lot of initiatives, very local, but done to reappropriate things, to bend digital technologies a bit, in a way that is a little more collective, creative, free, etc. There is individual awareness around the world, and actions of reappropriation, in a way. What gives me hope is that the awareness is there. Actions do exist. I think what these two layers lack is a third, institutional layer. What is missing is a kind of UN of technologies. I do not have a clear solution. But we can take an example from bioethics. In the 1980s, when we saw that genetic engineering was going to take on an exorbitant power, we quickly created institutions in all countries. I am not desperate; as Gramsci said, “I am a pessimist of the intellect, but an optimist of the will.”


Wendell Wallach

Scholar, Yale University Interdisciplinary Centre for Bioethics

The battle for what will be the prevailing narrative that informs science policy over the next decade is presently between the Transhumanists, who emphasise continuing evolution by technological means, and advocates for the SDGs and work on the principles of AI, whose emphasis is upon an inclusive future. The proponents of these narratives understand that we are at an inflection point in human history, and at present still have the opportunity to put down markers as to what values should inform our decisions going forward. The Enlightenment ethos is imploding on its own success, and there is a need for an Enlightenment 2.0, or perhaps a totally new ethos. If reason and science are going to prevail at all, that new ethos must recognise the interdependence of nature and culture and the contributory role of each and every entity on the planet. We have just begun the process of forging such an ethos. It will need to be complemented by inclusive forms of governance, which provide meaningful opportunities to prevail over the many challenges impinging upon humanity's future.


John R. McNeill

Professor for Environmental History, Georgetown University

Which technologies can reduce inequality? My answer to that is: simple technologies that are easy to operate and to maintain. One of the democratising technologies that historians are familiar with is iron tools and iron weapons, three thousand years ago. Some people say that today's cell phones and social media have a democratising effect. That may be, although I think it is too soon to say with respect to communications technologies. I think there is a long-term historical pattern whereby new technologies do have a democratising effect upon introduction; however, over time, and it may be decades, maybe centuries, that democratising effect is replaced by a centralisation of power. States learn how to use new communications technologies to empower themselves over citizens. I think you can see this in the history of writing, the history of printing, and the history of early electronic communications technologies such as radio. And I would not be surprised if we see it in the history of social media.

I do think that, with respect to international relations, new energy technologies are going to change things fundamentally, though I cannot predict exactly how. So much of the distribution of wealth and power in the international system over the last two hundred years has been connected to the fossil fuel energy system that a reduction in the centrality of fossil fuels is, I think, going to redistribute political power in the international system, and possibly have a decentralising effect on how power is exercised within it.

How can we assure mankind's well-being along with the sustainable health of our planet? Well, the first answer is: we can't. It's beyond our power. The way I approach this question, however, is to think about particular challenges: one is nuclear war, a second is pandemics, and the third is climate destabilisation, in no particular order of importance or urgency. Nuclear war, while it used to be a preoccupation of the chattering classes, has grown less interesting to most. I think that's unfortunate. We've been quite lucky since 1945. I think that's an unjustly neglected aspect of our health and well-being on the planet. Pandemics are not neglected; in the last 17 months, we've begun at last to pay attention to this particular risk to health and well-being on the planet. Covid-19 is actually a blessing in deep, deep disguise. I say this because the SARS-CoV-2 virus is not nearly as lethal as many other equally or even more transmissible viruses. The Covid-19 pandemic has alerted us to the risk of lethal and highly transmissible viruses without exposing us to the maximum effect. It is therefore like a societal vaccination. I would predict that for a generation we will have better preparation and better surveillance against the next emerging pathogens. If we're successful, however, complacency will return after a generation or two. Last, on climate destabilisation: the way I understand this problem is to think of three broad paths to climate stabilisation. The first would be a radical reduction in the use of energy and of certain materials that contribute to greenhouse gas accumulation; this seems a very unlikely path to me. The second is a radical decarbonisation of the energy system, so that fossil fuels account for a much smaller proportion of energy use. This would have to be combined with carbon sequestration on a grand scale. This, to me, seems the most desirable path. The third path is geoengineering to mask the effects of greenhouse gas loading. This frightens me terribly, because the history of technology tells us that it is very hard to do only one thing at a time, and we cannot know the full range of effects of planetary-scale tinkering of the sort that geoengineering proposals entail. I hope that the second path will prevail rather than the third.


Marius Dorobantu

Theological Anthropology, Vrije Universiteit Amsterdam

My background is in theological anthropology, and it is reflected in how I see the challenges posed by AI technologies, in general, and the possibility of Artificial Super-Intelligence, in particular. What I notice is that most of the discussion takes place around the all-important questions of how to ensure that AI will not malfunction or inadvertently increase inequality, discrimination, or authoritarianism. These questions are hugely relevant and urgent. But an argument can be made that these are, in fact, the easy questions posed by AI. They don’t have easy answers, but at least everyone can more or less agree upon the desirable outcome.

Here are two questions that are, in my opinion, more difficult. They both deal with the more optimistic scenario, in which we actually succeed in doing AI ‘right.’ Firstly, if we were somehow able to solve the alignment problem and instil in AI exactly the kind of goals and values that we wish, what would these be? Should, for example, humility be prioritised over courage? This is a topic where philosophy and religion, with their rich traditions of reflecting on such questions, could be of real help in our attempt to understand what we really value.

Secondly, what if advanced AI could relieve us of some of our most pressing issues? Given absolute power and freedom, it could govern the world more efficiently than we do, and it could, for example, potentially mitigate climate change and global poverty. Such a scenario seems desirable and, to some, even morally imperative. However, we might also sense that there is something intuitively wrong about completely delegating our decisions to AI, as benevolent as it might be. Why is a utopian world — where AI does all the work on our behalf and we become free to enjoy the cosiness of a pet's life — so inherently repugnant?

Such questions may become more urgent in the near future. To be prepared, we need to accompany current scientific and technological efforts with serious moral, philosophical and spiritual reflections. Everyone agrees that AI should be for the good of humanity, but understanding what good means is just as important as learning how to endow AI with the right kind of values and goals.


Eric Salobir

President, OPTIC Network

Technologies of today allow us to reflect on the future: The two questions about what it means to be human and how to live together in the age of new technologies are complementary, and partially overlap. To answer them, we can start from a reflection centred on technologies that already exist, and then evolve towards new technologies such as brain-machine interfaces or the simulation of consciousness by AI. We can push the reflection towards things that are quite forward-looking, like fusion or the connection of brains. And, in fact, we see that the questions that arise are ultimately the same. That is what is interesting. Ultimately, even if we don't have all the data to think about things that are very prospective, we can reflect on existential questions by relying on the data we have from technologies that are already mature now. If we identify the right vectors and good intellectual trajectories, we can almost reverse-engineer the impact those advances may have. This reverse-engineering will allow us to identify the first elements of answers for future technologies whose contours are still very imprecise.

3. How can we ensure the well-being of humankind and the sustainable future of our planet?

Climate emergency, rising population and increased resource needs threaten the balance of our planet. New fundamental scientific advances and future technologies could provide new ways to ensure sustainable, responsible and inclusive development for all. This requires a new understanding at the edge and the convergence of formal, natural and human sciences. How can we supply the world's population with the necessary food and energy and regenerate our planet?


Gabriele Dürbeck

Professor for Literature and Cultural Studies, Universität Vechta

Narratives enable us to create orientation and meaning in a highly complex or even chaotic world. Reconstructing narratives is therefore useful and important to understand why different people see the world in different terms, why they look at different causal relationships, and why they come to different conclusions about feasible and desirable futures. […] We can distinguish between five narratives of the Anthropocene: the disaster narrative, the court narrative, the narrative of the great transformation, the biotechnological narrative and the interdependence narrative of nature-culture.

The disaster narrative has an apocalyptic logic. Humankind is seen as a planetary killer, concerned only with its own short-term survival. It is a story of nature's decline and collapse caused by mankind. According to Elizabeth Kolbert's book, the sixth mass extinction (including our own species) is a central feature of the Anthropocene. We also have the metaphor of a sick planet. The narrative has an urging and a warning function. Concerning the topic of food and energy, it posits that we are facing increasing conflicts over control of food and energy resources, even food and energy wars.

The court narrative questions guilt and responsibility. Its plot follows the pattern of 'who has done it?'. We have villains and victims. Some speak about a 'eurocene' or a 'technocene', with the main polluters in the OECD and BRIC countries. And some others speak even of a 'capitalocene' to name the capital markets as the main culprits.

The narrative of the Great Transformation assumes that rescue from the climate crisis is still possible if we collaborate inter- and transnationally. So, it has a hypothetical happy ending. It features a 'responsible stewardship' of the Earth systems. First, it calls for mitigation, thus reducing the causes of ecological destruction, and secondly for a reasonable adaptation. Schellnhuber from the PIK takes up the discourse of ecological modernisation with the idea of continued economic growth, including fair burden sharing, so that potential victims are avoided. A responsible and sustainable society needs, many people say, a radical cultural change with a circular economy and different ways of consumption. […] Here, we have the metaphor of the world gardener or the earth gardener. And the villains are the people who are unwilling to contribute to the course of this transformation.

In the biotechnological narrative the heroes are the inventors, technologists and venture capitalists. It has different features: for example, geo-engineering or solar geo-engineering. Some people say we cannot cope with the climate crisis in the coming times without any climate engineering. But we also see the problem of unintended consequences. We also have the discussions on the Green Revolution 2.0, with technologically intensified seeds and crops, which cause new inequalities.

The nineteen people who have formed the Ecomodernist Manifesto speak of a great Anthropocene, with solar energy, with nuclear energy, with intensive farming and so on. The villains are the people who still cling to romantic concepts of nature and all the people who are proponents of the precautionary principle.

The interdependence narrative of nature-culture: The anthropos, the we, are seen as a part of networks of distributed actors that also include animals, plants, microorganisms and fungi. Proponents of this narrative speak of multispecies entanglements and multispecies justice. They problematise the long-established juxtaposition of nature and culture. And they also question the 'we' as a homogeneous or universal entity and assume multiple fractions of humanity. The interdependence narrative puts at the centre the mutual relationship between humans and other species. Its recognition of the contribution of other species to human well-being and civilisation leads to a responsibility of humans for other species. This has potentially far-reaching consequences for the use and treatment of animals and maybe also plants and food systems. From the perspective of an interspecies ethics and multispecies justice, the reduction of farm animals to bio-reactors for protein generation becomes unjustifiable.


John Dryzek

Professor at the Centre for Deliberative Democracy and Global Governance, University of Canberra

"Politics of the Anthropocene": How can we ensure the well-being of humankind and the sustainable future of our planet? My thoughts are published in my co-authored book The Politics of the Anthropocene (2020). The basic point of the book is that dominant institutions, including states, markets and international organisations, developed under relatively benign Holocene conditions in which the influence of a potentially unstable Earth system was simply not recognised. The Holocene is the epoch preceding the Anthropocene: the last eleven thousand years or so of unusual stability in the Earth system. My colleague Will Steffen likes to say that the Holocene represents the only state of the Earth system that we know for sure can support human civilisation. The problem is that the dominant institutions developed under the Holocene are not fit for purpose in the Anthropocene. The institutions that developed in the Holocene have a tendency toward pathological path dependencies, which makes them very resistant to change. That's why we're stuck with fossil fuels: these institutions generate forms of feedback which reinforce their own necessity, but largely ignore the condition of the Earth system.

The opposite of that kind of pathological path dependency is reflexivity, which is the ability of a structure, process or set of ideas to change itself in light of reflection on its performance; the capacity to be something different rather than just to do different things. So, it's much more fundamental than adaptiveness, for example. Ecological reflexivity in particular recognises the active influence of the Earth system itself, no longer a passive backdrop against which humanity and its institutions operate. Our institutions developed forms of feedback which systematically ignore feedback from ecological systems. Ecological reflexivity also involves recognising non-human entities, be they local ecosystems or the Earth system itself, as active players, not as things that we just operate on but things that are active, capable of causing surprises. This is where the Anthropocene narrative comes in: ecological reflexivity also involves looking ahead. It requires foresight, anticipation of potentially catastrophic state shifts in the system, and acting to prevent them. That's really demanding. And that's something which current institutions are really bad at. Democracies, for example, can sometimes respond pretty effectively to crises, but they're very, very bad at anticipating crises. So, what to do? We need to start from where we are now, rather than just postulate the sort of models which might exist sometime in the future, and look at how current institutions, practices and structures might be reformed.

We find some encouraging hints: the global governance of climate change, for example, has transformed over time. And it does show just a little hint of reflexivity, a recognition that the system was not working. The Paris Agreement, for example, which came after several decades of negotiations that got nowhere, involves a shift to hybrid multilateralism: a combination of top-down and bottom-up processes, as well as the orchestration of the role of non-governmental actors. We can also think of trying to produce a deliberative collective intelligence that is much more than the sum of its parts. My co-author Richard Norgaard analysed the Millennium Ecosystem Assessment and looked at the role of deliberation across different forms of scientific expertise. The kind of vernacular language that emerged there, which integrated different kinds of expertise, also provides an opening for meaningful citizen participation, which could help construct a reflexive, deliberative science. It is also possible to rethink justice in order to recognise influences across space and time much more than do our current dominant conceptions of justice, moving toward what we call planetary justice, which also involves thinking more explicitly about justice towards future generations.

Image credit: Climeworks direct air capture plant against the sky, © Climeworks, by Julia Dunlop