1.4.4 Human-robot interaction

Most industrial robots today are deliberately isolated from workers due to safety concerns. But ideally, we would like robots to work seamlessly alongside, or even in coordination with, humans.

Future Horizons:


5-year horizon

Cobots enter the workplace

Human-aware navigation is solved, making it possible for robots to be physically integrated into everyday life. LLMs allow people to have more complex and intuitive interactions with robots. Initial prototypes of robots with basic pain and empathy systems are developed. Collaborative robots become more common in the workplace, though only in the most routine jobs. Avatars, remote representations of real people, enable wider distribution of highly skilled services such as healthcare.

10-year horizon

Robots start to read human intention

Autonomous robots now have a rudimentary ability to understand human intentions and adapt to them. They also grasp basic social etiquette, such as turn-taking in conversations. Collaborative robots are able to take on more complex jobs when paired with a human expert who can guide the machine. There is increased adoption of empathetic robots in caregiving, education and companionship roles.

25-year horizon

Humans and robots adapt to each other

Humans and robots are able to interact seamlessly and work side-by-side in most environments. Robots have developed sophisticated social intelligence and self-awareness, able to understand and respond to human emotions, while humans have adapted to the ways in which robotic “cognition” is different from the way humans think.

At the most basic level, there has been progress on "human-aware" navigation algorithms that allow robots to safely occupy the same space as people.38 (C. Mavrogiannis, 'Core Challenges of Social Robot Navigation: A Survey', ACM Transactions on Human-Robot Interaction, Vol. 12, p. 36.) Collaborative robots, or "cobots", are also capable of simple interactions like object handovers.39 (V. Ortenzi, 'Object Handovers: A Review for Robotics', IEEE Transactions on Robotics, Vol. 37, p. 1855.) But more sophisticated forms of human interaction draw on deep reservoirs of implicit knowledge about social etiquette, and efforts to imbue machines with this understanding and the corresponding behaviour remain rudimentary.
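
As a deliberately simplified illustration of what "human-aware" costing can mean in practice, the Python sketch below adds a proxemics-style penalty to an ordinary travel cost, so a planner prefers routes that keep a comfortable distance from people. The Gaussian penalty model, the parameter values and the function names are illustrative assumptions, not taken from the cited survey.

```python
import math

# Minimal sketch of "human-aware" path costing: cells near people are
# treated as expensive rather than merely forbidden. The Gaussian
# social-cost model and all parameters are illustrative assumptions.

def social_cost(cell, humans, sigma=1.0, weight=5.0):
    """Penalty that grows as a cell approaches any detected human."""
    cost = 0.0
    for hx, hy in humans:
        d2 = (cell[0] - hx) ** 2 + (cell[1] - hy) ** 2
        cost += weight * math.exp(-d2 / (2 * sigma ** 2))
    return cost

def step_cost(a, b, humans):
    """Combined metric: travel distance plus discomfort imposed on people."""
    return math.dist(a, b) + social_cost(b, humans)

# Example: choosing between two next cells, one passing close to a person.
humans = [(2.0, 2.0)]
print(step_cost((0, 0), (1, 1), humans))   # closer to the person -> higher cost
print(step_cost((0, 0), (1, -1), humans))  # farther away -> cheaper
```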

LLMs present a promising opportunity to chip away at these challenges by allowing humans to interface with robots through natural language.40 (C. Zhang, 'Large Language Models for Human–Robot Interaction: A Review', Biomimetic Intelligence and Robotics, Vol. 3, p. 100131.) There is even tentative evidence that they have some ability to model the mental states of humans.41 (J. Strachan, 'Testing Theory of Mind in Large Language Models and Humans', Nature Human Behaviour, Vol. 8, p. 1285.) The outputs of these models can be inconsistent and unreliable, though, which makes them unsafe to deploy in many real-world situations.
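
To make the interface idea concrete, the sketch below shows one possible pattern for using a language model as a natural-language front end: the model is asked for a structured plan, and its output is validated against a fixed whitelist of robot skills before anything would be executed, reflecting the reliability concern above. The model call is stubbed out (`fake_llm`), and the skill names and JSON format are hypothetical.

```python
import json

# Illustrative sketch of an LLM as a natural-language interface to a robot.
# `fake_llm` stands in for a real model call; skill names are hypothetical.
# Because model output is unreliable, every step is checked against a
# whitelist of known skills before it could be executed.

ALLOWED_SKILLS = {"move_to", "pick_up", "hand_over"}

def fake_llm(prompt: str) -> str:
    # Stand-in for a real model; a deployment would call an LLM API here.
    return '[{"skill": "pick_up", "target": "red cup"}, {"skill": "hand_over", "target": "person"}]'

def parse_plan(instruction: str):
    """Ask the model for a JSON plan, then keep only well-formed, whitelisted steps."""
    raw = fake_llm(f"Convert to a JSON list of robot skills: {instruction}")
    try:
        steps = json.loads(raw)
    except json.JSONDecodeError:
        return None  # unusable output: ask the human to rephrase
    plan = [s for s in steps
            if isinstance(s, dict) and s.get("skill") in ALLOWED_SKILLS]
    return plan or None

print(parse_plan("Please pass me the red cup"))
```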

Understanding how humans interact with robots, both physically and psychologically, and what their expectations of the technology are, is also crucial.42 (D. Zhang, 'Human–Robot Interaction: Control, Analysis, and Design'.) A better understanding of how integrating robots into groups of humans affects social dynamics is also needed.43 (A. Rosenthal-von, 'Social Dynamics in Human-Robot Groups – Possible Consequences of Unequal Adaptation to Group Members Through Machine Learning in Human-Robot Groups', Artificial Intelligence in HCI, p. 396.) Given the prevalence of bias in AI training data, there is a risk that robots could replicate existing patterns of discrimination.44 (T. Hitron, 'Implications of AI Bias in HRI: Risks (and Opportunities) When Interacting with a Biased Robot', Proceedings of the 2023 ACM/IEEE International Conference on Human-Robot Interaction, Vol. 83, p. 2023.) Developing socially adept robots which feel empathy, exhibit moral reasoning and act accordingly may also require them to be able to feel pain, raising many ethical questions.45 (M. Asada, 'Artificial Pain May Induce Empathy, Morality, and Ethics in the Conscious Mind of Robots', Philosophies, Vol. 4, p. 38.) Establishing legal guidelines and frameworks to address the rights and responsibilities of autonomous robots will also be required.

Human-robot interaction - Anticipation Scores

The Anticipation Potential of a research field is determined by the capacity for impactful action in the present, considering possible future transformative breakthroughs in a field over a 25-year outlook. A field with a high Anticipation Potential, therefore, combines the potential range of future transformative possibilities engendered by a research area with a wide field of opportunities for action in the present. We asked researchers in the field to anticipate:

  1. The uncertainty related to future science breakthroughs in the field
  2. The transformative effect anticipated breakthroughs may have on research and society
  3. The scope for action in the present in relation to anticipated breakthroughs.

This chart represents a summary of their responses to each of these elements, which, when combined, provide the Anticipation Potential for the topic. See methodology for more information.