1.1.1. Deeper Machine Learning

    Many of the advances in deep learning have relied on humans laboriously labelling vast amounts of data for algorithms to train on. More recently, a new paradigm known as self-supervised learning has accelerated progress. Built on a neural network architecture called the transformer, it generates training labels directly from unlabelled data and can therefore train on far more data than before.9
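The core idea can be sketched in a few lines. In masked-token prediction, one common form of self-supervised learning, the "labels" are simply tokens hidden from the model, so no human annotation is needed. The function name and masking scheme below are illustrative assumptions, a minimal sketch rather than any production pipeline:

```python
import random

def make_self_supervised_examples(tokens, mask_rate=0.15, mask_token="[MASK]", seed=0):
    """Turn raw, unlabelled text into (input, label) training pairs by masking
    random tokens. The labels are just the original tokens at the masked
    positions -- the data labels itself (a sketch of masked-token prediction)."""
    rng = random.Random(seed)
    inputs, labels = [], []
    for i, tok in enumerate(tokens):
        if rng.random() < mask_rate:
            inputs.append(mask_token)       # hide the token from the model
            labels.append((i, tok))         # model must predict it from context
        else:
            inputs.append(tok)
    return inputs, labels

tokens = "the transformer learns to predict missing words from context".split()
inputs, labels = make_self_supervised_examples(tokens, mask_rate=0.3)
```

Because any unlabelled corpus can be converted into training pairs this way, the amount of usable training data grows from what humans can annotate to essentially all text available.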

    This has led to rapid improvements in AI language and coding capabilities and is also showing promise in other domains such as vision.10,11 So far, transformer performance scales reliably with model size, and the largest systems demonstrate emergent capabilities that they were not explicitly trained for, such as creativity and limited reasoning.12 They have also been applied with considerable success to scientific problems such as protein folding.13 However, it is still possible that the current “scale” approach relies mainly on memorisation, as its performance is much less impressive on tasks involving mathematics and logic. Making trained neural networks perform better on reasoning benchmarks remains a significant challenge, and new methodologies may be required.
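The claim that "performance scales reliably with model size" refers to empirical neural scaling laws, in which test loss falls as a power law of parameter count. The sketch below illustrates the shape of such a law; the constants are rough values from the scaling-law literature, used here purely for illustration, not a fit to any particular model:

```python
def scaling_law_loss(n_params, n_c=8.8e13, alpha=0.076):
    """Illustrative power-law scaling of test loss with parameter count,
    in the style of published neural scaling laws: L(N) = (N_c / N)^alpha.
    Constants n_c and alpha are rough literature values, for illustration only."""
    return (n_c / n_params) ** alpha

# Loss falls smoothly and predictably as models grow by orders of magnitude.
for n in (1e8, 1e9, 1e10, 1e11):
    print(f"{n:.0e} params -> predicted loss {scaling_law_loss(n):.3f}")
```

The practical significance is that such curves let labs forecast returns on larger training runs in advance; the open question in the text is whether the smooth decline in loss continues, and whether it translates into genuine reasoning ability rather than broader memorisation.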

    A further concern is that the imperative to build ever-larger models means that cutting-edge AI research is increasingly accessible only to well-funded private labs. Because these models are statistical in nature, they also readily learn biases from their training data and in some cases confidently “hallucinate” facts that are not true.14 More fundamentally, these models have no memory of their previous actions and their capabilities are baked in at training time, which means they cannot learn continuously from their interactions. There is significant debate within the field as to whether these capabilities will emerge with greater scale, can be built in explicitly, or whether we will need to move on to new architectures. Part of the problem is that the theory of machine learning still lags far behind the practice.

    Future Horizons:


    5-year horizon

    Deep learning models grow in scale

    Further scaling of deep learning models leads to performance that is increasingly indistinguishable from humans in language and vision tasks. Widespread deployment by businesses reshapes significant sections of the economy, such as customer service, content generation and programming; the models become significant assets that assist humans across a wide variety of tasks. Most countries adopt AI regulations to limit the potential negative impacts of AI, such as bias, misinformation and privacy invasion.

    10-year horizon

    Workforce disruption necessitates radical intervention from policymakers

    Deep learning systems outperform human knowledge workers in a wide range of professions. The resulting workforce disruption requires policymakers to rethink employment and wealth-distribution policies. Two currently open questions about exponential increases in model size are resolved. The first is whether model growth outpaces hardware improvements and data availability, leading to a plateau in scaling efforts and prompting renewed innovation in AI architectures. The second is whether further scaling leads to the emergence of missing ingredients, such as episodic memory and in-context learning, that make performance even more human-like, or better than human.

    25-year horizon

    Deep learning’s influence is ubiquitous in daily human life

    Given the current pace of acceleration, predictions about AI at this timescale are extremely difficult. If scaling laws hold, AI is likely to reach superhuman capability in all domains. In this scenario, it is also likely to become self-improving, leading to runaway progress that is impossible to forecast and potentially dangerous, requiring urgent policy decisions about AI’s agency and responsibility. However, it remains possible that there are fundamental barriers that scaling alone will not overcome; in that case, deep learning will remain a powerful tool able to match human performance on many tasks, but efforts to build generally intelligent AI will refocus on combining it with alternative AI approaches.

    Deeper Machine Learning - Anticipation Scores

    How the experts see this field in terms of the expected time to maturity, transformational effect across science and industries, current state of awareness among stakeholders and its possible impact on people, society and the planet. See methodology for more information.

    GESDA Best Reads and Key Resources