1.1. Advanced AI

    Artificial Intelligence (AI) aims to build machines that are able to behave in ways we associate with human activity: perceiving and analysing our environment, taking decisions, communicating and learning. There are various approaches to achieving this. The best known, and arguably most advanced, is machine learning (ML), which itself encompasses several broad approaches.

    Deep learning relies on an information processing architecture known as neural networks, which are loosely modelled on the brain. When fed large amounts of data, neural networks learn to recognise statistical patterns, which can then be used to solve a variety of complex tasks in areas like vision, language or game playing. Systems powered by deep learning have crossed some impressive milestones over the past decade. Computer vision systems identified objects better than humans in 2015.[1] The following year, AI systems beat a Go champion and started playing complex video games.[2] Autonomous cars have driven tens of millions of kilometres with very few accidents.[3] Last year, AI predicted the structure of nearly every protein known to science and brought chatbots with powerful linguistic capabilities to the general public.[4,5]
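
    To make the idea concrete, here is a minimal Python sketch (not taken from the Radar itself) of what learning statistical patterns from data means in practice: a tiny two-layer neural network trained by gradient descent on the XOR pattern. The architecture, data and hyperparameters are illustrative only; production deep-learning systems differ mainly in scale and tooling, not in principle.

        # Minimal sketch of deep learning: a small two-layer neural network
        # that learns the XOR pattern (not linearly separable) from examples
        # by gradient descent. All choices here are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)

        # Training data: inputs and the XOR labels we want the network to learn.
        X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
        y = np.array([[0], [1], [1], [0]], dtype=float)

        # Randomly initialised weights for a 2-8-1 network.
        W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))
        W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))

        def sigmoid(z):
            return 1.0 / (1.0 + np.exp(-z))

        lr = 0.5
        for step in range(10000):
            # Forward pass: tanh hidden layer, sigmoid output.
            h = np.tanh(X @ W1 + b1)
            p = sigmoid(h @ W2 + b2)

            # Backward pass: gradients of the cross-entropy loss.
            dz2 = (p - y) / len(X)
            dW2 = h.T @ dz2; db2 = dz2.sum(axis=0, keepdims=True)
            dz1 = (dz2 @ W2.T) * (1 - h ** 2)
            dW1 = X.T @ dz1; db1 = dz1.sum(axis=0, keepdims=True)

            # Gradient descent update.
            W1 -= lr * dW1; b1 -= lr * db1
            W2 -= lr * dW2; b2 -= lr * db2

        print(np.round(p, 2))  # should approach [[0], [1], [1], [0]]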

    This rapid progress is primarily due to the increasing availability of training data and computing power. There is growing evidence that AI capabilities scale reliably with model size, leading many AI research labs and organisations to focus on building ever larger systems.[6,7] However, deep learning-based AI remains prone to bias and often cannot generalise what it has learned to new situations. The inner workings of neural networks are also opaque, and they are inefficient learners, requiring vast amounts of data, processing power and energy. This has led to predictions that other approaches will be required to achieve more adaptable, efficient and trustworthy AI that can be deployed broadly across society.[8] That said, many in the field believe continued scaling could soon resolve these outstanding challenges, and that society needs to prepare for the disruptive effects of AI that can outperform humans in a wide range of tasks.
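
    As a hedged illustration of the scaling claim above, the short Python sketch below fits a power law, loss ≈ a·N^(−α), to a handful of invented (model size, loss) pairs and extrapolates it to a larger model. The numbers are hypothetical; the point is only that such trends are usually summarised as a straight-line fit in log-log space, and that extrapolation is meaningful only if the trend continues to hold.

        # Hypothetical sketch of a scaling-law fit: loss(N) ≈ a * N**(-alpha).
        # The data points below are invented for illustration, not measurements.
        import numpy as np

        model_sizes = np.array([1e6, 1e7, 1e8, 1e9, 1e10])   # parameter counts (hypothetical)
        eval_losses = np.array([4.1, 3.4, 2.9, 2.45, 2.1])   # held-out loss (hypothetical)

        # Fit log(loss) = log(a) - alpha * log(N), i.e. a straight line in log-log space.
        slope, intercept = np.polyfit(np.log(model_sizes), np.log(eval_losses), 1)
        alpha, a = -slope, np.exp(intercept)
        print(f"fitted exponent alpha ≈ {alpha:.3f}")

        # Extrapolate to a 10x larger model -- useful only if the trend holds.
        N_next = 1e11
        print(f"predicted loss at {N_next:.0e} parameters ≈ {a * N_next ** -alpha:.2f}")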

    Selection of GESDA best reads and key reports

    The 2023 AI Index by the Stanford University Institute for Human-Centered Artificial Intelligence provides a comprehensive snapshot of AI's trajectory this year, ranging from the growing ethical discussions surrounding AI to notable strides in diversity within the field. Published in March 2023, Neurosymbolic AI: the 3rd wave by Artur d’Avila Garcez and Luís C. Lamb surveys the emerging field of neurosymbolic computing, in which neural networks are combined with symbolic reasoning, setting the stage for more trustworthy and interpretable AI systems. An analog-AI chip for energy-efficient speech recognition and transcription, presented in August by a team of US and Japanese researchers, underscores the potential of analogue in-memory computing (analog-AI) to improve the energy efficiency of very large AI models.
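
    As a rough, hypothetical illustration of what combining neural networks with symbolic reasoning can look like, the toy Python sketch below pairs a stubbed-in neural classifier (its output is hard-coded here) with a small rule base; the labels, rules and confidence threshold are all invented stand-ins for the much richer systems discussed by d’Avila Garcez and Lamb.

        # Toy neurosymbolic sketch: a (stubbed) neural perception module supplies
        # an uncertain label, and a symbolic rule base draws conclusions from it.
        # All names, rules and thresholds here are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class Perception:
            label: str
            confidence: float  # would come from a trained neural network

        # Symbolic knowledge: simple "is-a" facts.
        IS_A = {"cat": "animal", "dog": "animal", "car": "vehicle"}

        def neural_classifier(image) -> Perception:
            # Stand-in for a trained network; returns a soft prediction.
            return Perception(label="cat", confidence=0.87)

        def symbolic_query(perception: Perception, category: str) -> bool:
            # Apply the symbolic rule only when the neural evidence is strong enough.
            if perception.confidence < 0.5:
                return False
            return IS_A.get(perception.label) == category

        p = neural_classifier(image=None)
        print(symbolic_query(p, "animal"))   # True: rule chained onto the neural output
        print(symbolic_query(p, "vehicle"))  # False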

    Anticipation Potential

    Sub-Fields:

    Deeper Machine Learning
    Multimodal AI
    Intelligent Devices
    Alternative AI

    The past year has seen staggering developments in deep learning and large language models, exemplified by the release of ChatGPT and other very large models. While the future is very hard to predict, the societal and environmental impact of Advanced AI is rated as very high. However, because of its short path to maturity (and because it has already received considerable attention), the Anticipation Scores are relatively low. Alternative approaches to AI could nonetheless be transformative within 10 years, suggesting more work is needed to understand their potential implications.

    GESDA Best Reads and Key Resources

    Article

    AGI Safety Literature Review, May 2018, arXiv

    Published:

    7th Aug 2021

    The development of Artificial General Intelligence (AGI) promises to be a major event. Along with its many potential benefits, it also raises serious safety concerns (Bostrom, 2014). The intention of this paper is to provide an easily accessible and up-to-date collection of references for the emerging field of AGI safety. A significant number of safety problems for AGI have been identified. We list these, and survey recent research on solving them. We also cover works on how best to think of AGI from the limited knowledge we have today, predictions for when AGI will first be created, and what will happen after its creation. Finally, we review the current public policy on AGI.

    Article

    Research Priorities for Robust and Beneficial Artificial Intelligence, Winter 2015, AI Magazine (AAAI)

    Published:

    7th Aug 2021

    Success in the quest for artificial intelligence has the potential to bring unprecedented benefits to humanity, and it is therefore worthwhile to investigate how to maximize these benefits while avoiding potential pitfalls. This article gives numerous examples (which should by no means be construed as an exhaustive list) of such worthwhile research aimed at ensuring that AI remains robust and beneficial.

    Article

    Smart Partnerships amid Great Power Competition: AI, China and the Global Quest for Digital Sovereignty (REPORT) // January 2021, Atlantic Council GeoTech Center

    Published:

    7th Aug 2021

    The report captures key takeaways from various roundtable conversations, identifies the challenges and opportunities that different regions of the world face when dealing with emerging technologies, and evaluates China’s role as a global citizen. In times of economic decoupling and rising geopolitical bipolarity, it highlights opportunities for smart partnerships, describes how data and AI applications can be harnessed for good, and develops scenarios on where an AI-powered world might be headed.

    Article

    Why AI is Harder Than We Think // 28.04.2021, arXiv

    Published:

    7th Aug 2021

    Since its beginning in the 1950s, the field of artificial intelligence has cycled several times between periods of optimistic predictions and massive investment (“AI spring”) and periods of disappointment, loss of confidence, and reduced funding (“AI winter”). Even with today’s seemingly fast pace of AI breakthroughs, the development of long-promised technologies such as self-driving cars, housekeeping robots, and conversational companions has turned out to be much harder than many people expected. One reason for these repeating cycles is our limited understanding of the nature and complexity of intelligence itself. In this paper I describe four fallacies in common assumptions made by AI researchers, which can lead to overconfident predictions about the field. I conclude by discussing the open questions spurred by these fallacies, including the age-old challenge of imbuing machines with humanlike common sense.