
Deep Dive:

The Future of People, Society and the Planet(s)

The Philosophical Lens

    GESDA anticipates scientific and technological advancements to develop inclusive and global solutions for a sustainable future. Three fundamental and overarching questions drive its work:

    • Who are we, as humans? What does it mean to be human in the era of robots, gene editing, and augmented reality?

    • How can we all live together? What technology can be deployed to help reduce inequality, improve well-being, and foster inclusive development?

    • How can we ensure the well-being of humankind and the sustainable future of our planet? How can we supply the world population with the necessary food and energy while regenerating our planet?

    The Science Breakthrough Radar provides an overview of trends and breakthrough anticipations at 5, 10 and 25 years in 42 emerging science and technology topics that could have a strong bearing on the answers to these questions. However, reaping the benefits of anticipated science and technology advances will require careful consideration of the future of humans at the individual, collective, and planetary levels. To this end, the “Philosophical Lens” section extends initial insights from the previous editions of the Radar. Building on expert knowledge from an inclusive and diverse panel of leading philosophers, collected through individual interviews, it reflects on how the science and technology advances anticipated by the Radar might transform who we are as humans, how we live together as societies, and how we relate to the planet.

    By reflecting on the transformational potential of anticipated science and technology advances and the fundamental questions they raise, the Philosophical Lens has three main roles. First, it makes the underlying assumptions of these advances explicit by exploring their central points of tension and how they challenge the status quo. Second, it provides frameworks for making sense of these advances from the individual, the collective, and the global perspectives. To do so, it assesses the need to develop new concepts and to reframe our questions about the individual, the collective, and the planet in light of these advances, while always eschewing simplistic answers. Third, it aims to help co-shape the environments that will facilitate the deployment of these science and technology advances toward shared goals.

    To fulfil its roles, this contribution acts in the spirit of an “honest knowledge broker”. In other words, we will always seek to open the solution space for future legal and regulatory frameworks of science and technology by expanding the scope of alternative choices available to decision-makers and relevant legitimate actors. Thereby, our purpose is to provide the elements to initiate debates about opportunities “to use the future to build the present” over the next two decades or so.

    Who are we, as humans?

    Science and technology advances described in Quantum Revolution & Advanced AI and in Human Augmentation will radically transform our human condition at the individual level. These transformations will involve two core aspects of who we are as humans: our minds and our bodies. In doing so, they will compel us to rethink our concepts of the properties that we might have taken to be typical of us as humans and central to how we individuate ourselves. Such concepts include rationality, intelligence, consciousness, autonomy, agency (whether moral, epistemic or other), control, and identity (race, gender, etc.).

    As an example, rational intelligence in the Western tradition has long been viewed as distinctively human (Aristotle’s zoon logikon) and was implicitly paired with other typically human properties such as consciousness and understanding. But nowadays, non-conscious machines lacking any understanding can outperform human rational intelligence at sophisticated problem-solving tasks, thus prompting humans to acquire new heuristics by learning from the machines. This calls for a fundamental rethink of our very concept of rationality and of what supposedly makes it so distinctive.1 Along these lines, future massive offloading or delegation of “non-basic” human cognitive abilities to technologies will only increase the need for such careful reassessments.

    Rethinking the concepts at the basis of our understanding of who we are as humans invites us to reflect on the purported uniqueness of our human condition. What will be important to keep in mind in this regard is that all these basic human concepts are always value-laden. For instance, agency and individual autonomy are essential parts of contemporary human rights.2 All these basic human concepts thus come with a normative bearing that tends to crystallise into ideals endowed with a standardising power (e.g., ableism, sanism, and ageism in the case of health-related values connected to Human Augmentation). And as such, they can be discriminatory and exclusionary too — like the very concept of a human itself; the granting or denial of human status has served to justify and maintain oppressive systems throughout humankind’s history.

    A key question at the individual level will then be whether we might want to preserve certain properties, or a combination thereof, as the sole remit of human beings — and, if so, which ones, in which contexts, and for which tasks. As we hinted at in the previous editions of the Lens, a possible way forward to tackle this key question would be to adopt a pluralistic strategy whereby we would make room for a “de-anthropocentred” variety of said basic concepts, performing differently at different tasks and in different contexts, and which would then no longer be distinctive of us as humans — had they ever been. For instance, we might have artificial or computational creativity alongside human creativity without treating the latter as genuine and the former as counterfeit. This might help us create distinct spaces for human-artificial collaboration. While such hybrid spaces in the domains of creativity, decision-making, emotion, problem-solving, and so on will no longer be specific to humans, we might still want to prioritise instances of human involvement in these activities. One challenge this would raise is how to avoid “speciesism” when focusing on human instantiations: if we want to prioritise concerns for our fellow human beings, will this always result in discrimination against non-human agents with which we lack kinship relations?

    Considering the normative bearing and standardising power of these concepts, a challenge for us, as humans, will be to ensure their equal bestowal among individuals while striving to preserve a diversity of norms and ideals across cultures. At the same time, it might offer a tremendous opportunity to revise our basic ontological categories and oppositions — for example, subject vs. object, natural vs. artificial, humans vs. machines — towards a more respectful and integrative reality. In this context, it is not so much “who we are, as humans” that might matter anymore (in the search for some putatively “essential” properties), but “how we are, as humans, in the era of robots, gene editing, and augmented reality” — that is, in a world where artificially designed interfaces continuously interfere with our individual condition, as humans.

    How can we all live together?

    Science and technology advances described in Quantum Revolution & Advanced AI first and foremost, but also in Eco-Regeneration & Geoengineering more indirectly (as well as in Human Augmentation to a lesser extent), will radically transform the human condition at the collective level. These transformations will primarily concern the relational dimension of our human condition, that is, our “being human through others” (as per Ubuntu philosophy),3 our “political animal” condition (following Aristotle), or our “relational self” (in the Confucian tradition).4 In so doing, they will incite us to rethink the concepts and principles at the basis of our social interactions. This will involve our interpersonal relationships in a very concrete way via notions such as responsibility, accountability, trust, property, sovereignty, security, friendship, or (group) privacy. At a more abstract level, it will involve our social institutions via notions such as democracy, justice, or citizenship.

    The reason for these expected radical transformations is that we, as humans, are both socially and technologically interdependent beings, and that technological artefacts, in turn, are always embedded in socio-anthropological contexts made up of collective practices, norms, and values, which give these artefacts their cultural meaning.5 This complex relationality of who we are at the collective level implies that changes in our technological environment will bear on our social structures and practices from a specific cultural perspective and thereby eventually affect our human condition. This is reminiscent of the previous Lenses. What might be specific to the current and coming digital ages in this regard is that emerging digital technologies — qua information and communication technologies — have the potential to directly target basic human social structures and practices at the most fundamental level. The unprecedented scale of emerging digital technologies such as general-purpose artificial intelligence,6 combined with future technoscientific convergence trends, means that this radical transformational potential might be pervasive across all domains of our human condition at the collective level.

    As with the individual level, the radical transformations of our human condition at the collective level by the science and technology breakthroughs anticipated in the Radar will challenge our most basic ontological categories, for instance, via the blurring of the real and the virtual or via the introduction of new kinds of non-human agents in our social realities, as we already noted previously. But emerging digital technologies specifically will also ever more alter the interindividual structures and dynamics of our communication environments (e.g., via echo chambers, epistemic bubbles, or group polarisation),7,8 putting at stake the key concepts and principles of our interindividual relationships we alluded to above (trust, privacy, and friendship, for example). Likewise, emerging technologies in general will transform how our social institutions’ norms, values, and ideals are implemented on the ground, for instance, via the enforcement of policing strategies by ever more powerful surveillance technologies. In turn, these transformations of our social interactions and collective practices will feed back into the various dimensions of who and how we are, as humans, at the individual level, such as our well-being and mental health.

    A key question will then be not only what but also how technology can be deployed to help reduce inequality, improve well-being, and foster inclusive development, as GESDA aims to do. In other words, which frameworks could we use to ensure the inclusive, responsible, and sustainable deployment of emerging technologies at the collective level? Tackling this key question will require improving our understanding of our social and technological interdependent relationships as individuals. We might have to acknowledge the limits of the individualistic conception of the self as independent and autonomous, which is at the basis of modern Western culture, and complement it with a relational conception of autonomy9 by which humans, as individuals, realise their autonomy through their relationships with others in socio-technological environments — or “socio-technical systems”.10 In this context, technology would no longer primarily serve the indefinite extension of the modern individual’s limitless free will but could instead aim at shared goals.

    In this spirit, gaining a new understanding of our socio-technological relationality as humans would be an opportunity to democratise technoscientific research and innovation. In a socio-technical system, sensitivity to societal impacts is key from the early phases of the design of the technology, even at the anticipatory stage, and then along the complete life cycle from technological deployment to dismantling and replacement. Inclusive, participative, and deliberative processes running throughout all development and use stages of emerging technologies would thus help ensure that their benefits are shared inclusively, responsibly, and sustainably at the collective level. At the same time, because of the potential implications of these technologies for how we are at the collective level (in terms of well-being, for example), the advancements anticipated by the Radar will invite us to consider the structures of ownership and control that we, as societies, ought to set up. This would require overcoming current “techno-power” systems essentially driven by financial and political incentives. Without such a shift in the socio-technological relationality of our human condition at the collective level, it is indeed unlikely that the science and technology advancements anticipated by the Radar will align with GESDA’s widely shared ambition to help reduce inequality, improve well-being, and foster inclusive development.

    How can we ensure the well-being of humankind and the sustainable future of the planet?

    Considering their environmental cost, most science and technology advances anticipated by the Radar will indirectly transform our human condition at the global level of our relationship to the planet. One consequence of anthropogenic environmental disruption, in general, will be the increased integration of the external environment into our very own human condition. This integration will proceed both in an environment-to-human direction, because of the increased individual and collective insecurity resulting from the foreseeable risk of ecological disasters, and in a human-to-environment direction, via the development of ever “smarter” environments. Breakthroughs in Eco-Regeneration & Geoengineering science and technology designed to address and mitigate the risk of ecological disasters will be of paramount significance: the environmental crisis will create even more pressure to deploy radical (and costly) technology solutions to avert disaster. Through the deployment of these scientific advances and technologies, the dichotomous human-environment relationship, where humans are “the measure of all things” (Protagoras) and “the masters and possessors of nature” (Descartes) ruling “over the fish in the sea and the birds in the sky, over the livestock and all the wild animals, and over all the creatures that move along the ground” (Genesis 1:26), might eventually transform into a more responsible, sustainable, and inclusive relationship between humans and the environment — a “de-centring” of our human condition.

    In challenging human-centredness and moving toward a more ecocentric perspective, a key question posed by climate engineering technologies will then be to consider the limits of our relationship to nature as an instrumental resource under our control. Here, a challenge will be to negotiate an acceptable trade-off between the recognition of nature’s intrinsic value and the need for us, as humankind, to preserve a certain degree of control over nature. And in doing so, we will have to demonstrate epistemic, moral, and ontological humility by recognising that we ultimately have no way out of our human condition in how we come to know, value, and belong to our environment. In the Anthropocene, the concept of environmental catastrophe can only be conceived in relation to the human beings capable of representing it. This subtle combination of anthropocentric humility and “de-centredness” might be decisive if we want to transition from a relationship with the planet driven by control and domination to one that is more sustainable, responsible, and inclusive.

    On a more practical note, the sustainable, responsible, and inclusive deployment of technologies to ensure the well-being of humankind and the sustainable future of our planet will presuppose normative frameworks based on public interest. Another challenge for human societies at the global level of their relationship with the planet will then be to find principled ways to distribute the costs and benefits of climate change, including those of climate engineering responses to anthropogenic environmental disruption, if these technologies are not to exacerbate existing inequalities. Public discussions about such cost/benefit trade-offs, along with appropriate public control over technology usage on the planet, will be needed to avoid an increased climate divide between citizens, companies, or countries. Our human condition at the global level of our relationship to the planet might thus ever more become a (geo)political issue above all.

    Transversal observations

    The Philosophical Lens reflects on how the science and technology advances anticipated by the Radar might transform who we are as humans, how we live together as societies, and how we relate to the planet in light of GESDA's fundamental and overarching questions. Now that we have considered each level in turn, we might take a step back and ask what, if anything, is distinctive about the science and technology advances anticipated by the Radar in comparison to similar transformational patterns in the technoscientific history of humankind. At this stage, a few observations crossing these three levels can be drawn.

    On several occasions, we have highlighted the radical transformational potential of the science and technology advances anticipated by the Radar. However, it might not be the “metaphysical” nature of all three levels that will be transformed by emerging technologies so much as their concrete realisations. For instance, it is individual human lives that will be radically transformed by emerging technologies, not any putative human nature — whatever that might be. In this regard, emerging technologies might merely reveal, make more salient, and amplify implicit aspects of our human condition. As an example, cognitive enhancement technologies might make it more obvious that human cognition is and has always been an extended, distributed, and collective activity — that is, a socially embedded activity distributed among a collective of individuals whose cognitive processes heavily rely on external technological devices.11

    From a transversal perspective, a striking fact about the revelatory power of emerging technologies is that they tend to increase, and make more salient, existing entanglements between the individual, collective, and environmental levels, through interdependencies and feedback loops. Specifically, the more transformational a technology, the more its impact will permeate across all three levels, simultaneously disrupting the most basic concepts of human self-understanding, social interrelations, and ontologies. For instance, data-driven intelligence transforms how we are individuated through algorithmic group affiliations, which in turn requires superseding an individualistic conception of privacy by a collective one, in order to eventually protect the individual’s autonomy and identity.12,13 Will it be possible to keep the “founding principle” that nothing should interfere with the individual’s ability to take responsibility? A critical challenge in this regard will be to strike an appropriate balance between the integration and distinction of all three levels toward more respectful boundaries.

    Overall, the transformational potential of the science and technology advances anticipated by the Radar is unprecedented in its scale and pace. For our human condition at the individual, collective and global levels, these anticipated science and technology advances might be more pervasive and immersive than ever before. They might also continually increase the mediation of our interpersonal and human-world relationships by technological devices and conversely diminish direct human-to-human or human-to-world interactions. Given their intricacy, these transformational features of anticipated science and technology advances will involve “wicked problems” at all three levels of our human condition — that is, incomplete, contradictory problems with complex interdependencies and no unique answer or definite stopping point, but whose solution creates or reveals new problems instead (such as security and trust in the context of technology-mediated interactions).14 This will entail increased unpredictability in decision-making contexts, with unknown unknowns about potential secondary impacts that go beyond the foreseeable.

    Such situations are typically captured by the Collingridge dilemma in technology assessment, governance, and regulation, which states a double-bind quandary between an information problem and a power problem: namely, that impacts cannot be easily predicted until the technology is extensively developed and widely used, while control or change is difficult when the technology has become entrenched.15 In other words, we either have high power but low information, or high information but low power, over a given technology. Here, the challenge will be to find the right adjustment between an acceptable predictive uncertainty and a reasonable capacity to influence technology development in a socially and environmentally desirable way.

    A way forward?

    The apparent synchronised, global awareness of emerging risks related to some science and technology advances anticipated by the Radar may be due to their proximity (as with artificial intelligence, for example), combined with the specifics of these anticipated advances (digital environments are perhaps “more” artificial than any previous technology, for example). This might give us, as societies, a greater ability, or opportunity, to purposefully steer science and technology development toward shared, macro-level goals. However, a systemic problem remains: we presently lack appropriate environments for responsible, sustainable technoscientific research and innovation to prevail. At this point, we might need to consider realigning technoscientific research and innovation incentives to promote inclusive and sustainable global human flourishing. A preliminary approach would be to better define what “global human flourishing” could mean in future technoscientific ages. We would also certainly have to re-envision the traditional metrics used to measure the global human flourishing we seek to foster.

    There is a need for new tools to address global ethical issues. A way forward would be to reinvent ourselves in relation to technoscientific environments “from the inside out”, rather than only through external research and innovation policy frameworks (such as RRI or ELSA),16,17 and then to derive appropriate normative principles from a much deeper understanding of who we are, as individuals and collectives, in a global context — a “global ethics”, as we called it in the previous Lens. Notwithstanding complex cultural differences and geopolitical factors, the paradigm shift from an individualistic to a relational conception of autonomy, where autonomous individuals are socio-historico-technologically situated and interdependent across space and time (including from cross-cultural and intergenerational perspectives), would help move away from dominating ethno- and anthropocentrism toward a more inclusive and sustainable integration of humans, technology, and the planet.

    In turn, this paradigm shift might change how we think about normative issues in science and technology ethics, policy, governance, and regulation. Beyond the “Design for Values” movement, it would allow an inclusive, participatory, and deliberative democratisation of the technology life cycle, from design through development and use to replacement. In this context, education will be key to improving users’ technological literacy at the global level. Thereby, we might ensure better control of, prevention of, and resistance to problematic aspects of technology (e.g., artificial intelligence’s built-in biases), minimise divides in terms of power, control, and access to technology, and increase awareness of intergenerational responsibility. But this might not be enough to further responsible, sustainable technoscientific research and innovation. Considering the multi-faceted dimension of the science and technology advances anticipated by the Radar, collaborations across domain-specific technoscientific disciplines and more holistic technoscientific curricula (including humanities and social sciences) will also be critical for experts to elucidate the core normative implications of using general-purpose technologies in multiple fields while avoiding endlessly reinventing the wheel. Keeping in mind GESDA's three fundamental and overarching questions throughout the deployment of these advances would help them “ultimately lead to a better life for humanity on a healthy planet,” in the terms used in a previous Radar.

    In view of the science and technology advances anticipated by the Radar, the time might thus have come to “re-envision ethics” and our normative frameworks, as we suggested in the previous Lens. Combining a top-down implementation of normative principles that drive incentives for technoscientific research and innovation with bottom-up inputs that relay demands from citizen communities might pave the way toward a “positive” notion of ethics whereby normative principles would be integral enablers that allow technology to become a public good. By designing environments that can facilitate ethical choices, actions, or processes, we might eventually be in a position to anticipate and manage science and technology’s implications at the individual, collective, and planetary levels. While it is exciting to think that we can engineer the future we want, we also have to consider what we might lose. Public and informed discussions of trade-offs in such a complex and uncertain space will be central to the honest brokering of science and technology.