1.5.2 XR interfaces

Creating immersive virtual experiences will require technology that is highly adaptive to the user and their needs. While the rich body of evolving research on human-computer interaction provides a solid foundation,31 XR interfaces will have to take account of the unique ways that humans interact with their spatial environment across different contexts.32

Future Horizons:

5-year horizon

XR experience becomes highly fluid and intuitive

Rapid improvements in the ability to track hands, face and eyes make it possible to pick up almost imperceptible physical cues that indicate the user’s attention, intention and emotional state. This is used to power highly adaptive user interfaces that make XR experiences even more fluid and intuitive than real life.

10-year horizon

XR becomes globally available

Hardware advances make lightweight, affordable and energy-efficient devices as common as smartphones. AI embedded in these devices makes it possible to provide context-aware experiences, making real-time environmental adjustments to personalise content to the user’s needs and situation.

25-year horizon

Users’ thoughts directly interface with XR

Brain-computer interfaces become the predominant interface for XR, allowing users to control and experience virtual environments via thought alone. This dramatically improves accessibility of XR technology and enables users to visualise, manipulate and inhabit information and environments in ways that are intuitive and deeply personalised.

Many XR headsets today come with handheld controllers, but interaction is becoming increasingly multimodal. On many devices, camera-based gesture recognition and eye-tracking allow users to interact with virtual elements through simple hand movements or by using their visual focus as a cursor. Beyond offering more intuitive control options, the wide range of sensors embedded in leading XR devices can yield powerful insights into the user’s attention, intent and actions.
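As a concrete illustration of gaze-driven interaction, the sketch below casts the user’s gaze ray against a set of virtual elements to determine which one the user is focused on; a gesture such as a pinch could then confirm the selection. The data structures and the pick_element function are hypothetical stand-ins for whatever gaze and scene APIs a given XR runtime exposes, not a real device interface.

```python
# Minimal sketch of gaze-as-cursor selection, assuming the XR runtime exposes a
# gaze ray (origin + direction) and scene elements with selection spheres.
# All names here (GazeSample, UIElement, pick_element) are illustrative.
from dataclasses import dataclass
import math

@dataclass
class GazeSample:
    origin: tuple[float, float, float]     # eye position in world space (metres)
    direction: tuple[float, float, float]  # unit vector of the gaze ray

@dataclass
class UIElement:
    name: str
    centre: tuple[float, float, float]  # world-space centre of the element
    radius: float                       # selection radius (metres)

def _ray_sphere_distance(origin, direction, centre):
    """Perpendicular distance from a sphere centre to the gaze ray."""
    oc = [c - o for o, c in zip(origin, centre)]
    t = sum(a * b for a, b in zip(oc, direction))        # projection onto the ray
    closest = [o + t * d for o, d in zip(origin, direction)]
    return math.dist(closest, centre), t

def pick_element(gaze: GazeSample, elements: list[UIElement]) -> UIElement | None:
    """Return the nearest element whose selection sphere the gaze ray passes through."""
    best, best_t = None, float("inf")
    for el in elements:
        dist, t = _ray_sphere_distance(gaze.origin, gaze.direction, el.centre)
        if dist <= el.radius and 0 < t < best_t:
            best, best_t = el, t
    return best

if __name__ == "__main__":
    ui = [UIElement("play", (0.0, 1.5, -2.0), 0.1), UIElement("stop", (0.3, 1.5, -2.0), 0.1)]
    raw = GazeSample(origin=(0.0, 1.6, 0.0), direction=(0.0, -0.05, -1.0))
    n = math.sqrt(sum(d * d for d in raw.direction))       # normalise the gaze direction
    gaze = GazeSample(raw.origin, tuple(d / n for d in raw.direction))
    focused = pick_element(gaze, ui)
    print("focused:", focused.name if focused else None)   # a pinch would then confirm
```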

Machine-learning models can analyse eye-tracking data and ego-centric video captured by XR devices to infer what activity a user is engaged in33,34 or even their intention to interact with both virtual and physical objects.35 Eye-tracking data can also provide a window into the user’s attention and level of engagement.36 When combined with basic physiological monitoring, it can also help to estimate the user’s cognitive load while carrying out tasks in XR.37
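The snippet below sketches, in simplified form, how a few aggregate gaze features (saccade rate, pupil diameter) might feed a cognitive-load estimate of the kind described above. The feature set, thresholds and three-level output are illustrative assumptions only; real systems train models on labelled data and calibrate per user rather than relying on fixed rules.

```python
# Illustrative sketch of estimating cognitive load from eye-tracking features.
# The features and thresholds are assumptions for demonstration, not a validated model.
from dataclasses import dataclass
from statistics import mean

@dataclass
class GazeFrame:
    timestamp: float        # seconds
    x: float                # normalised gaze position on the display
    y: float
    pupil_diameter: float   # millimetres

def extract_features(frames: list[GazeFrame], saccade_threshold: float = 0.05) -> dict:
    """Summarise a window of gaze samples into simple aggregate features."""
    saccades = 0
    for prev, cur in zip(frames, frames[1:]):
        # Treat a large jump in gaze position between consecutive frames as a saccade.
        if abs(cur.x - prev.x) + abs(cur.y - prev.y) > saccade_threshold:
            saccades += 1
    duration = frames[-1].timestamp - frames[0].timestamp
    return {
        "saccade_rate": saccades / duration if duration > 0 else 0.0,
        "mean_pupil_diameter": mean(f.pupil_diameter for f in frames),
    }

def estimate_cognitive_load(features: dict) -> str:
    """Toy heuristic: dilated pupils plus frequent saccades suggest higher load."""
    score = 0
    if features["mean_pupil_diameter"] > 4.0:   # assumed baseline, varies per user
        score += 1
    if features["saccade_rate"] > 3.0:          # saccades per second, assumed
        score += 1
    return ["low", "moderate", "high"][score]

if __name__ == "__main__":
    window = [GazeFrame(t * 0.02, 0.5 + 0.1 * (t % 3), 0.5, 4.3) for t in range(50)]
    feats = extract_features(window)
    print(feats, "->", estimate_cognitive_load(feats))
```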

By combining these user-centric insights with spatial information, virtual elements and control interfaces can be adapted to the user’s immediate context in real time.38 This can be used both to improve the usability of virtual interfaces39 and to overlay helpful information and control options onto real-world objects.40,41 Given advances in “internet of things” connectivity between XR devices and the physical world, it can even reduce the need for real-world physical interfaces. It can also make it possible for XR systems to detect when the user’s ability to interact with specific controls and interfaces is impaired by factors such as poor lighting, noise or multi-tasking, and to adapt accordingly.42
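A minimal, rule-based sketch of this kind of context-aware adaptation is shown below. The context fields, thresholds and adaptation choices (falling back to gaze input when hands are busy or lighting is poor, deferring notifications under high cognitive load) are assumptions for illustration; a deployed system would tune or learn such policies per user and task.

```python
# Minimal sketch of rule-based interface adaptation from user and environment context.
# The context fields and rules below are illustrative assumptions, not a product spec.
from dataclasses import dataclass

@dataclass
class Context:
    ambient_lux: float        # scene illumination from device sensors
    noise_db: float           # background noise level
    cognitive_load: str       # "low" | "moderate" | "high"
    hands_busy: bool          # e.g. the user is carrying something

@dataclass
class InterfaceConfig:
    input_mode: str = "gesture"      # "gesture" | "gaze" | "voice"
    target_scale: float = 1.0        # multiplier applied to button sizes
    notifications: str = "visual"    # "visual" | "audio" | "deferred"

def adapt_interface(ctx: Context) -> InterfaceConfig:
    cfg = InterfaceConfig()
    if ctx.hands_busy:
        cfg.input_mode = "gaze"            # fall back to gaze when hands are occupied
    if ctx.ambient_lux < 10:
        cfg.target_scale = 1.5             # low light: hand tracking is less reliable
        cfg.input_mode = "gaze"
    if ctx.noise_db > 75:
        cfg.notifications = "visual"       # too loud for audio cues
    elif ctx.cognitive_load == "high":
        cfg.notifications = "deferred"     # avoid interrupting a demanding task
        cfg.target_scale = max(cfg.target_scale, 1.25)
    return cfg

if __name__ == "__main__":
    ctx = Context(ambient_lux=5.0, noise_db=60.0, cognitive_load="high", hands_busy=True)
    print(adapt_interface(ctx))
```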

XR interfaces - Anticipation Scores

The Anticipation Potential of a research field is determined by the capacity for impactful action in the present, considering possible future transformative breakthroughs in a field over a 25-year outlook. A field with a high Anticipation Potential, therefore, combines the potential range of future transformative possibilities engendered by a research area with a wide field of opportunities for action in the present. We asked researchers in the field to anticipate:

  1. The uncertainty related to future science breakthroughs in the field
  2. The transformative effect anticipated breakthroughs may have on research and society
  3. The scope for action in the present in relation to anticipated breakthroughs.

This chart represents a summary of their responses to each of these elements, which, when combined, provide the Anticipation Potential for the topic. See methodology for more information.