Further down the line, AR glasses will be able to integrate complex virtual objects into real-world scenes. However, doing so intuitively and seamlessly will require significant breakthroughs in 3D scene reconstruction18 and low-latency graphics to improve realism.
A crucial aspect will be innovations in user experience19 — how to present virtual content to people in ways that are both intuitive and powerful. This will require some standardisation in interfaces so that people can seamlessly switch between AR apps. Always-on AR will require careful design to ensure that users aren't overwhelmed with information or notifications, and that it enhances their cognition without being distracting, obtrusive or addictive.
To make this a reality, AR will have to become adaptive, using AI to analyse data from physiological sensors and cameras to understand both the user and their environment.20 This will require breakthroughs in areas like continual learning — so that AI can update models of the user on the fly — and activity recognition, so that AR devices can understand the user's context and anticipate their needs. Core AI functionality like image recognition or translation will be served by large, shared models accessed over the cloud, but the AI responsible for learning a user's preferences will need to run on the AR device itself, in a decentralised fashion.
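To make the on-device idea concrete, here is a minimal sketch of a preference model that updates after every user interaction, with no cloud round-trip. It is purely illustrative: the class name, feature names, and learning rate are assumptions, and a real AR system would use far richer models and sensor inputs. The sketch uses plain online logistic regression, one small gradient step per observed interaction.

```python
import math

class OnDevicePreferenceModel:
    """Illustrative online logistic regression, updated one interaction
    at a time so the user model adapts on the fly, on the device."""

    def __init__(self, n_features, lr=0.1):
        self.w = [0.0] * n_features  # per-feature weights
        self.b = 0.0                 # bias term
        self.lr = lr                 # learning rate for each update step

    def predict(self, x):
        # Probability that the user wants this content in this context.
        z = self.b + sum(wi * xi for wi, xi in zip(self.w, x))
        return 1.0 / (1.0 + math.exp(-z))

    def update(self, x, accepted):
        # Single SGD step on the log-loss for one observed interaction.
        err = self.predict(x) - (1.0 if accepted else 0.0)
        self.w = [wi - self.lr * err * xi for wi, xi in zip(self.w, x)]
        self.b -= self.lr * err

# Hypothetical context features: [is_walking, is_at_desk, is_social_ping]
model = OnDevicePreferenceModel(n_features=3)
for _ in range(200):
    # The user repeatedly dismisses social pings while walking...
    model.update([1.0, 0.0, 1.0], accepted=False)
    # ...but accepts work prompts while at a desk.
    model.update([0.0, 1.0, 0.0], accepted=True)

# The device has now learned to suppress the first and surface the second.
print(model.predict([1.0, 0.0, 1.0]) < 0.5)  # True
print(model.predict([0.0, 1.0, 0.0]) > 0.5)  # True
```

Because the model lives entirely on the device and learns from interaction signals alone, sensitive behavioural data never needs to leave the glasses — one plausible reason the text argues for decentralised, per-user learning alongside large cloud-hosted models.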