Hence, running models embedded on “intelligent devices” will be imperative for many of the most promising AI applications. However, smaller machines with limited power supplies are unable to support the computing and energy requirements of today’s leading models. This is spurring efforts to develop more efficient chips specialised for embedded applications, and optimisation techniques that can make models smaller, more energy efficient and faster without sacrificing performance.21,22
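One widely used optimisation technique of this kind is post-training quantization, which stores a model's weights at lower numerical precision to cut memory and energy costs. The sketch below is illustrative only (plain Python, not any particular framework's API): it maps float weights onto the int8 range with a single symmetric scale factor.

```python
def quantize_int8(weights):
    # Symmetric post-training quantization: map the largest-magnitude
    # weight to 127 and round every weight onto the int8 grid.
    scale = max(abs(w) for w in weights) / 127.0
    return [round(w / scale) for w in weights], scale

def dequantize(qweights, scale):
    # Recover approximate float weights for inference.
    return [q * scale for q in qweights]

weights = [0.42, -1.27, 0.05, 0.9]
q, scale = quantize_int8(weights)
approx = dequantize(q, scale)
# Each weight now needs 1 byte instead of 4 (float32), at the cost of
# a rounding error of at most scale / 2 per weight.
```

Real deployments (e.g. on mobile or microcontroller runtimes) use per-channel scales and calibration data, but the storage-versus-accuracy trade-off is the same.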
Training these intelligent devices will also require innovations. Federated learning is a promising approach that distributes training across many smaller devices without pooling data centrally, reducing bandwidth requirements and improving privacy.23 Reinforcement learning, in which an AI learns a task by repeatedly taking actions and observing whether they maximise a carefully chosen reward, is seen as a promising way to train robots and other automated machines. Doing this in the real world is slow and costly, though, and collecting enough real-world training data is a significant challenge. A new paradigm that involves training models in simulations and then porting them to devices may offer a powerful alternative.24
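The federated-learning idea can be sketched in a few lines: each device fits a shared model to its own private data, and only the updated weights, never the raw data, travel back to a server to be averaged. The toy 1-D linear model and function names below are illustrative assumptions, not any particular framework's API.

```python
import random

def local_train(w, data, lr=0.1):
    # One local pass of gradient descent for the model y = w * x with
    # squared loss; the device's data never leaves the device.
    for x, y in data:
        grad = 2 * (w * x - y) * x
        w -= lr * grad
    return w

def federated_average(w, device_datasets, rounds=20):
    # Each round: every device trains the shared model locally, then
    # the server averages the returned weights (federated averaging).
    for _ in range(rounds):
        local_ws = [local_train(w, data) for data in device_datasets]
        w = sum(local_ws) / len(local_ws)
    return w

# Synthetic private datasets on three devices, all drawn from y = 3x.
random.seed(0)
devices = [[(x, 3 * x) for x in (random.random() for _ in range(20))]
           for _ in range(3)]
w = federated_average(0.0, devices)
# w converges towards the true slope 3 without pooling any data.
```

Production systems add secure aggregation and weighting by dataset size, but the bandwidth and privacy benefits described above follow from this same structure: model updates are small and raw data stays local.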