Best Reads


    The GESDA Best Reads provide a carefully curated list of recent key articles and resources relating to the emerging scientific topics described in the trends section. The GESDA Best Reads are also available as a monthly newsletter at the following link.

    Article

    AGI Safety Literature Review, May 2018, arXiv

    Published:

    7th Aug 2021
    The development of Artificial General Intelligence (AGI) promises to be a major event. Along with its many potential benefits, it also raises serious safety concerns (Bostrom, 2014). The intention of this paper is to provide an easily accessible and up-to-date collection of references for the emerging field of AGI safety. A significant number of safety problems for AGI have been identified. We list these, and survey recent research on solving them. We also cover works on how best to think of AGI from the limited knowledge we have today, predictions for when AGI will first be created, and what will happen after its creation. Finally, we review the current public policy on AGI.

    Article

    Research Priorities for Robust and Beneficial Artificial Intelligence, Winter 2015, AI Magazine (AAAI)

    Published:

    7th Aug 2021
    Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to investigate how to maximize these benefits while avoiding potential pitfalls. This article gives numerous examples (which should by no means be construed as an exhaustive list) of such worthwhile research aimed at ensuring that AI remains robust and beneficial.

    Article

    Smart Partnerships amid Great Power Competition: AI, China and the Global Quest for Digital Sovereignty (report), January 2021, Atlantic Council GeoTech Center

    Published:

    7th Aug 2021
    The report captures key takeaways from various roundtable conversations, identifies the challenges and opportunities that different regions of the world face when dealing with emerging technologies, and evaluates China’s role as a global citizen. In times of economic decoupling and rising geopolitical bipolarity, it highlights opportunities for smart partnerships, describes how data and AI applications can be harnessed for good, and develops scenarios on where an AI-powered world might be headed.

    Article

    Why AI is Harder Than We Think, April 2021, arXiv

    Published:

    7th Aug 2021
    Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment (“AI spring”) and periods of disappointment, loss of confidence, and reduced funding (“AI winter”). Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this paper I describe four fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I conclude by discussing the open questions spurred by these fallacies, including the age-old challenge of imbuing machines with humanlike common sense.

    1.1.1 Deeper Machine Learning

    1.1.2 Multimodal AI

    1.1.3 Intelligent Devices

    1.1.4 Alternative AI

    Article

    Here, there and everywhere, The Economist

    Published:

    7th Aug 2021
    The Economist offers authoritative insight and opinion on international news, politics, business, finance, science, technology and the connections between them.

    1.2.1 Quantum Communication

    1.2.2 Quantum Computing

    1.2.3 Quantum Sensing and Imaging

    1.2.4 Quantum Foundations

    1.3.1 Neuromorphic Computing

    1.3.2 Organoid Intelligence

    1.3.3 Cellular Computing

    1.3.4 Optical Computing

    1.4.1 Augmented Reality Hardware

    1.4.2 Augmented Experiences

    1.4.3 AR Platforms