Artificial Intelligence (AI) aims to build machines that behave in ways we associate with human activity: perceiving and analysing their environment, making decisions, communicating and learning. There are various approaches to achieving this. The most well-known, and arguably most advanced, is machine learning (ML), which itself encompasses several broad approaches.
Deep learning relies on an information-processing architecture known as neural networks, which are loosely modelled on the brain. When fed large amounts of data, neural networks learn to recognise statistical patterns, which can then be used to solve a variety of complex tasks in areas like vision, language and game playing. Systems powered by deep learning have passed some impressive milestones over the past decade. Computer vision systems began identifying objects more accurately than humans in 2015.[1] The following year, deep learning systems beat a Go champion and started playing complex video games.[2] Autonomous cars have driven tens of millions of kilometres with very few accidents.[3] Last year, AI predicted the structure of nearly every known protein and brought chatbots with powerful linguistic capabilities to the general public.[4,5]
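The pattern-learning loop at the heart of deep learning can be sketched in a few lines of NumPy: a toy two-layer network, trained by gradient descent, learns the XOR relation from four examples. This is a minimal illustration only, not a production architecture; the network size, learning rate and iteration count are arbitrary choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dataset: XOR, a pattern no single linear unit can capture.
X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)

# One hidden layer of 8 sigmoid units feeding a single sigmoid output.
W1 = rng.normal(0, 1, (2, 8))
b1 = np.zeros(8)
W2 = rng.normal(0, 1, (8, 1))
b2 = np.zeros(1)

def sigmoid(z):
    return 1 / (1 + np.exp(-z))

lr = 0.5
for _ in range(10_000):
    # Forward pass: compute the network's current predictions.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)
    # Backward pass: gradients of mean squared error w.r.t. each layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)
    # Gradient descent step: nudge weights to reduce the error.
    W2 -= lr * h.T @ d_out
    b2 -= lr * d_out.sum(axis=0)
    W1 -= lr * X.T @ d_h
    b1 -= lr * d_h.sum(axis=0)

# Final predictions and error after training.
h = sigmoid(X @ W1 + b1)
out = sigmoid(h @ W2 + b2)
preds = (out > 0.5).astype(int)
loss = float(np.mean((out - y) ** 2))
print(preds.ravel(), loss)
```

The same loop — forward pass, error measurement, gradient-based weight update — scales from this four-example toy to the billion-parameter systems described above; only the data volume and compute change.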
This rapid progress is primarily due to the increasing availability of training data and computing power. There is growing evidence that AI capabilities scale reliably with model size, leading many AI research labs and organisations to focus on building ever larger systems.[6,7] However, deep learning-based AI remains prone to bias and often cannot generalise what it has learned to new situations. The inner workings of neural networks are also opaque, and they are inefficient learners, requiring vast amounts of data, processing power and energy. This has led to predictions that other approaches will be required to achieve more adaptable, efficient and trustworthy AI that can be deployed broadly across society.[8] That said, many in the field believe continued scaling could soon resolve these outstanding challenges, and that society needs to prepare for the disruptive effects of AI that outperforms humans across a wide range of tasks.
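The scaling behaviour referred to above is often summarised as a power law. For language models, Kaplan et al. (2020) report fits of the form below; the constants are their empirical estimates and should be treated as indicative rather than universal:

```latex
L(N) \approx \left(\frac{N_c}{N}\right)^{\alpha_N},
\qquad \alpha_N \approx 0.076,
```

where $L$ is the model's test loss and $N$ its number of parameters. The small exponent means that each constant-factor reduction in loss requires a multiplicative increase in model size — one reason labs pursuing this strategy build ever larger systems.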
Selection of GESDA best reads and key reports
The 2023 AI Index by the Stanford University Institute for Human-Centered Artificial Intelligence provides a comprehensive snapshot of AI's trajectory this year, ranging from the growing ethical discussions surrounding AI to notable strides in diversity within the field. Published in March 2023, Neurosymbolic AI: the 3rd wave by Artur d’Avila Garcez and Luís C. Lamb surveys neurosymbolic computing, in which neural networks are combined with symbolic reasoning, setting the stage for more trustworthy and interpretable AI systems. An analog-AI chip for energy-efficient speech recognition and transcription was presented in August by US and Japanese researchers. The paper highlights the potential of analogue in-memory computing (analog-AI) to improve energy efficiency for large AI models.