1.1.1 Future of generative AI
The most widely appreciated progress in AI has come from models able to generate new content, such as text, images, video, audio and software code. Generative AI, as the field is known, is underpinned by extremely large models trained on enormous text and image datasets scraped from the internet.

Future Horizons:


5-year horizon

Generative models continue to improve

Researchers continue to improve generative models built on current architectures (such as transformers), focusing on better data efficiency and robustness. Synthetic data generation is widely used to supplement limited datasets. Significant advances are made in alternative learning paradigms beyond brute-force scaling, with new architectures emerging that enable learning from smaller, more diverse datasets. AI models begin to synthesise genuinely novel data, expanding their creativity and generalisation.

10-year horizon

Generative AI incorporates different kinds of information

Generative AI incorporates multimodal and grounded information more effectively, potentially integrating embodied learning and world models. Ethical alignment mechanisms reach practical deployment, fostering greater societal trust.

25-year horizon

Generative AI systems display adaptive creativity

Further progress enables generative AI systems to display adaptive creativity and self-supervised scenario generation, facilitating breakthroughs in science, the arts and society. Fundamental innovations reshape collaborative research paradigms between academia and industry.

Current generative AI research relies heavily on brute-force scaling with ever-larger datasets and computing power, raising questions about sustainability and future data limitations. There is a call for architectural innovation enabling AI to learn efficiently from modest data,4 drawing on concepts from human learning and cognitive science.

Approaches to cope with data limitations include generating synthetic data, employing new data diversification algorithms5,6 and developing smarter, data-efficient learning paradigms. However, progress in generative AI must overcome issues with robustness and generalisation by moving from “vanilla” data synthesis to creating data that does not yet exist online, and by exploring alternative architectures to transformers. The field must also avoid over-fixation on current trends (brute-force scaling) and foster unconventional collaborations, especially between academia and industry, to drive fundamental change.

It is important to emphasise that ethical considerations, safety and alignment of generative models with societal values should be foundational to further adoption.7 It is also important to keep technical progress aligned with societal needs and to raise critical questions about long-term future directions of the field.8

Future of generative AI - Anticipation Scores

The Anticipation Potential of a research field reflects the capacity for impactful action in the present, given possible transformative breakthroughs in that field over a 25-year outlook. A field with high Anticipation Potential therefore combines a wide range of potential future transformative developments with broad opportunities for action in the present. We asked researchers in the field to anticipate:

  1. The uncertainty related to future science breakthroughs in the field
  2. The transformative effect anticipated breakthroughs may have on research and society
  3. The scope for action in the present in relation to anticipated breakthroughs.

This chart summarises their responses to each of these elements, which, when combined, provide the Anticipation Potential for the topic. See the methodology for more information.