1.1.1 Deeper Machine Learning

Many advances in deep learning have relied on humans laboriously labelling vast amounts of data for algorithms to train on. More recently, a new paradigm known as self-supervised learning has accelerated progress by deriving training labels from the data itself. The effect has been greatest in language tasks, where transformer models learn to predict these “engineered labels” by completing text, allowing them to train on far more data than before.8
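To make the idea concrete, the short sketch below (written in Python purely for illustration; the toy whitespace tokeniser and example sentence are assumptions, not anything described in this report) shows how such labels can be engineered from the raw text itself: for next-token prediction, each training target is simply the next word in the sequence, so no human annotation is needed.

    # Minimal sketch of how self-supervised "labels" are engineered from raw text.
    # The whitespace tokeniser and tiny corpus are illustrative assumptions only;
    # real systems use learned subword tokenisers and large neural networks.

    def tokenise(text):
        """Map each whitespace-separated word to an integer id."""
        vocab = {}
        ids = []
        for word in text.lower().split():
            ids.append(vocab.setdefault(word, len(vocab)))
        return ids

    def next_token_examples(token_ids):
        """Build (context, target) pairs where the target is simply the next token.

        No human labelling is needed; the raw text supplies its own labels.
        """
        return [(token_ids[:i], token_ids[i]) for i in range(1, len(token_ids))]

    corpus = "the cat sat on the mat"
    for context, target in next_token_examples(tokenise(corpus)):
        print(f"context={context} -> predict token {target}")

Because the labels come for free with the text, the amount of training data is limited only by how much text can be gathered, which is what allows transformer models to be trained at such scale.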

Future Horizons:


5-year horizon

Deep learning models grow in scale

Further scaling of deep-learning models leads to performance that is increasingly indistinguishable from humans in language and vision tasks. Widespread deployment by businesses reshapes significant sections of the economy, such as customer service, content generation and programming. AI agents capable of acting autonomously are deployed in low-risk areas, providing assistance to humans. Most countries adopt regulations to limit AI’s potential negative impacts, such as bias, misinformation and privacy invasion. There is also a push to clarify copyright protections for those whose data is used to train AI.

10-year horizon

Workforce disruption necessitates radical intervention from policymakers

Deep-learning systems outperform human knowledge workers in a wide range of professions. AI agents start to interact with each other, creating a new AI workforce operating alongside humans. The resulting disruption forces policymakers to rethink employment and wealth distribution policies. Two currently open questions about the exponential growth in model size are also resolved by this point. The first is whether that growth outpaces improvements in hardware and the availability of data, causing scaling efforts to plateau and prompting renewed innovation in AI methods. The second is whether continued scaling of models and datasets keeps supplying the missing ingredients needed to improve the reasoning and trustworthiness of AI.

25-year horizon

Deep learning’s influence is ubiquitous in daily human life

Given the current pace of acceleration, predictions about AI at this timescale are extremely difficult. If scaling laws hold, AI is likely to reach superhuman capabilities in all domains. In this scenario, it is also likely to become self-improving, leading to runaway progress that is impossible to forecast and potentially dangerous, requiring urgent policy decisions about AI’s agency and responsibility. However, it remains possible that there are fundamental barriers that scaling alone will not overcome, particularly in areas requiring advanced logic. In that case, deep learning remains a powerful tool that matches human performance on many tasks, but efforts to build generally intelligent AI refocus on combining it with alternative approaches.

This new paradigm has led to rapid improvements in AI language and coding capabilities and is also showing promise in domains like vision and robotic control.9,10,11 So far, transformer performance scales reliably with model or dataset size, and the largest systems appear to demonstrate emergent capabilities they were not explicitly trained for, such as a degree of creativity and limited reasoning in language.12 They have also been applied, with considerable success, to scientific problems such as protein folding.13
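The scaling behaviour referred to above is typically summarised as an empirical power law, in which held-out loss falls smoothly as a power of model size, dataset size or compute. The short Python sketch below evaluates a generic law of that form; the function name and the constants n_c and alpha are illustrative placeholders rather than values taken from the studies cited here.

    # Illustrative sketch of an empirical neural scaling law of the form
    # L(N) = (N_c / N) ** alpha, where N is the number of model parameters.
    # The constants below are placeholders chosen for illustration, not
    # values reported in the studies cited in the text.

    def power_law_loss(n_params, n_c=1e13, alpha=0.08):
        """Predicted held-out loss for a model with n_params parameters."""
        return (n_c / n_params) ** alpha

    # Each tenfold increase in parameters lowers the predicted loss by the
    # same multiplicative factor of 10 ** -alpha.
    for n in (1e8, 1e9, 1e10, 1e11):
        print(f"{n:.0e} parameters -> predicted loss {power_law_loss(n):.3f}")

Because each tenfold increase in scale improves the predicted loss by a roughly constant factor, researchers have been able to forecast the returns to further scaling, which helps explain why the field has pursued ever-larger models.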

Nonetheless, there is evidence that models built under the current “scale” approach learn mainly by memorisation, as exemplified by their poor performance on more complex tasks such as mathematics and logic.14 Research has also suggested that seemingly emergent capabilities are simply the result of flawed testing.15 And because these models are statistical in nature, they readily absorb biases from their training data and can confidently “hallucinate” facts that are not true.16

The imperative to build ever-larger models also means that cutting-edge AI research is increasingly accessible only to well-funded labs with significant computing power. There is interest in creating smaller models,17 but access to data and hardware remains a key differentiator and is stoking geopolitical competition over AI chips.18

Deeper Machine Learning - Anticipation Scores

The Anticipation Potential of a research field is determined by the capacity for impactful action in the present, considering possible transformative breakthroughs in the field over a 25-year outlook. A field with high Anticipation Potential therefore combines a broad range of potential future transformative possibilities with ample opportunities for action in the present. We asked researchers in the field to anticipate:

  1. The uncertainty related to future science breakthroughs in the field
  2. The transformative effect anticipated breakthroughs may have on research and society
  3. The scope for action in the present in relation to anticipated breakthroughs

This chart summarises their responses to each of these elements, which, when combined, provide the Anticipation Potential for the topic. See the methodology for more information.