Dagstuhl Seminar 23371

Roadmap for Responsible Robotics

(Sep 10 – Sep 15, 2023)


Permalink
Please use the following short URL to reference this page: https://www.dagstuhl.de/23371

Organizers
  • Michael Fisher (University of Manchester, GB)
  • Seth Lazar (Australian National University - Canberra, AU)
  • Marija Slavkovik (University of Bergen, NO)
  • Astrid Weiss (TU Wien, AT)

Coordinators
  • Anna Dobrosovestnova (TU Wien, AT)
  • Nick Schuster (Australian National University - Canberra, AU)

Summary

The ISO 8373 standard ("Robots and Robotic Devices - Vocabulary") defines a robot as "an actuated mechanism programmable in two or more axes moving within its environment, to perform intended tasks". Aligned with this definition, we take "robotics" to cover a wide range of devices - e.g. vehicles, probes, drones, industrial devices, and personal robots - as well as the complex sociotechnical processes surrounding the development and deployment of such systems. Given that robotic systems are increasingly capable of acting without direct human oversight, and that they are being deployed in an ever wider variety of contexts, a range of concerns beyond technical reliability emerges. Many authors, across a variety of disciplines, have pointed to the need for "responsibility" in robotic systems. However, while it is popular to highlight this as a target, there is no agreed route to achieving responsible robotics, and often little agreement even on what responsibility here comprises.

The aim of this Dagstuhl Seminar was to identify the key components of responsibility in this context and then, crucially, to provide a roadmap for achieving responsible robotics in practice. By doing so, the seminar contributed to ongoing efforts such as the Roboethics Roadmap put forth in January 2007 by the European Robotics Research Network (EURON), the European Union's REELER, SIENNA, and TECHETHOS projects, and the UK's RoboTIPS project, among others.

In the original proposal of the seminar, four themes commonly associated with responsible robotics were emphasized: trust, fairness, predictability, and understandability. In the course of the seminar, however, the participants - comprising philosophers, engineers, roboticists, cognitive scientists, and industry representatives - identified a broader range of concerns. First, some discussions focused on what responsibility means from different disciplinary perspectives and how these meanings apply to the development, deployment, use, and disposal of robots. In these discussions, it was emphasized that the very term "responsibility" is ambiguous in philosophy and law, yet this ambiguity and complexity are rarely reflected in debates on responsibility in the context of AI and robotics. Following [1], responsibility gaps in sociotechnical systems were discussed. We converged on an understanding of responsible robotics as broadly capturing the idea that the various parties involved in the development, deployment, integration, and maintenance of robots need to act responsibly. This involves behaving ethically in their various roles, building ethically sensitive robots, and ultimately taking responsibility for how robotics as a field progresses and how robots are used. It includes "role responsibility", relating to specific functions in robotics; "professional responsibility", covering obligations within the robotics profession; "moral responsibility", involving ethical decision-making and the anticipation of consequences; "legal responsibility", pertaining to compliance with relevant laws and regulations; "social responsibility", regarding the broader impacts of robotic systems on human societies; and "environmental responsibility", regarding their impacts on the natural environment.
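
To make this taxonomy concrete for system builders, the following is a minimal sketch in Python of how the six senses of responsibility might be encoded and attached to stakeholder roles. The enum, the stakeholder names, and the mapping are purely illustrative assumptions, not outputs of the seminar:

    from enum import Enum, auto

    class Responsibility(Enum):
        """The six senses of responsibility discussed at the seminar."""
        ROLE = auto()           # specific functions in robotics
        PROFESSIONAL = auto()   # obligations within the robotics profession
        MORAL = auto()          # ethical decision-making, anticipating consequences
        LEGAL = auto()          # compliance with laws and regulations
        SOCIAL = auto()         # broader impacts on human societies
        ENVIRONMENTAL = auto()  # impacts on the natural environment

    # Hypothetical mapping from stakeholders to the responsibility
    # types most salient for them (illustrative, not exhaustive).
    STAKEHOLDER_RESPONSIBILITIES = {
        "developer": {Responsibility.ROLE, Responsibility.PROFESSIONAL,
                      Responsibility.MORAL},
        "manufacturer": {Responsibility.LEGAL, Responsibility.SOCIAL,
                         Responsibility.ENVIRONMENTAL},
        "regulator": {Responsibility.LEGAL, Responsibility.SOCIAL},
    }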

As an important step toward ensuring responsible robotics, discussions considered the diverse roles and responsibilities of key stakeholders, including businesses, universities, governments, users, and others who stand to affect, or be affected by, robotic systems.

It was noted that universities play a crucial role in shaping the professionals who design, engineer, and operate robotic systems. Engineering and design curricula should thus include modules on responsible innovation, safety standards, and the potential consequences of misuse. This could be done by intensifying dialogue and collaboration with other disciplines, in particular the humanities and social sciences, following promising initiatives such as Embedded EthiCS.

To align robotics with ethical standards, businesses in turn must conduct thorough risk assessments, addressing potential misuses and implementing safeguards in their products. For example, in the case of AI-based robotic systems, providers may rely on existing risk management frameworks such as the one recently developed by the National Institute of Standards and Technology for AI systems (https://www.nist.gov/itl/ai-risk-management-framework), as sketched below. Additionally, businesses should provide comprehensive user manuals, conduct user training programs, and actively collaborate with regulatory bodies to establish industry-wide standards. Transparent communication about the capabilities and limitations of their products is essential to ensure that users have a clear understanding of how to responsibly engage with robotic technologies.

Finally, governments play a pivotal role in creating and enforcing regulations that govern the use of robotic products and services. They must collaborate with industry experts to establish ethical guidelines, safety standards, and legal frameworks, and regulatory bodies should continuously update these frameworks to keep pace with technological advancements. Governments should also invest in public awareness campaigns to educate citizens about the benefits and risks of robots, mitigating the potential for misuse or misunderstanding.
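
The NIST AI Risk Management Framework organizes activities into four core functions (Govern, Map, Measure, Manage). As a minimal sketch of the business-side risk assessment mentioned above: the register entries and the severity × likelihood scoring below are our illustrative assumptions, not part of the framework's normative text:

    from dataclasses import dataclass

    # The four core functions of the NIST AI Risk Management Framework.
    RMF_FUNCTIONS = ("govern", "map", "measure", "manage")

    @dataclass
    class RiskEntry:
        hazard: str      # e.g. "robot misidentifies a person as an obstacle"
        function: str    # which RMF function the mitigation falls under
        mitigation: str
        severity: int    # 1 (negligible) .. 5 (catastrophic)
        likelihood: int  # 1 (rare) .. 5 (frequent)

        @property
        def risk_score(self) -> int:
            return self.severity * self.likelihood

    register = [
        RiskEntry("misuse of teleoperation interface", "govern",
                  "role-based access control and operator training", 4, 2),
        RiskEntry("navigation failure near humans", "measure",
                  "field testing against a safety benchmark", 5, 2),
    ]

    # Prioritise mitigations by descending risk score.
    for entry in sorted(register, key=lambda e: e.risk_score, reverse=True):
        print(entry.hazard, entry.risk_score)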

Discussions also emphasized that an extended definition of responsibility, encompassing not only technical but also social and political considerations, requires a similarly expansive understanding of trust, fairness, predictability, and understandability, as well as the addition of other normative concepts. To address this, further potentially relevant concepts were identified through an iterative voting exercise (sketched below). The final list included:

  • dignity - the inherent worth of each member of the moral community who stands to be impacted by robotic systems;
  • autonomy - enabling human beings to act in accordance with their own interests and aspirations;
  • privacy - empowering people to protect and share sensitive information about themselves as they see fit;
  • safety - protecting the various aspects of physical and emotional well-being;
  • trust - ensuring that people have good reason to believe that robotic systems are aligned with their legitimate interests;
  • justice/fairness - making the impacts of robotic systems acceptable to all who stand to be affected by them;
  • accountability - ensuring that the right agents are held to account for adverse outcomes; and
  • sustainability - regarding the impacts of robotic systems on the natural world and future generations.

It was not our objective to generate an exhaustive list. Rather, the list reflects the principal concerns that emerged from discussion of current and near-future uses and capabilities of robotic systems.
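
The report does not record the mechanics of the voting exercise. Purely as an illustration, assuming a simple approval-style tally over candidate concepts (the ballots below are hypothetical), the shortlisting step might look like this:

    from collections import Counter

    # Each participant approves any number of candidate concepts.
    ballots = [
        {"dignity", "autonomy", "safety"},
        {"safety", "trust", "privacy"},
        {"justice/fairness", "safety", "accountability"},
    ]

    tally = Counter(concept for ballot in ballots for concept in ballot)

    # Keep concepts approved by more than half of the participants.
    threshold = len(ballots) / 2
    shortlist = [c for c, votes in tally.most_common() if votes > threshold]
    print(shortlist)  # ['safety'] for these example ballots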

In addition to the plenary discussions, four working groups were held, on the following topics:

  • Fairness
  • Trust
  • Why do robots require different considerations?
  • Predictability

References

  1. Santoni de Sio, Filippo, and Giulio Mecacci. Four responsibility gaps with artificial intelligence: Why they matter and how to address them. Philosophy & Technology, 34: 1057-1084, 2021.
Copyright Michael Fisher, Seth Lazar, Marija Slavkovik, and Nick Schuster

Motivation

Responsible Robotics is an appealing goal. It captures the idea of developing and deploying physical autonomous systems for the benefit of both individuals and society. However, although a popular target, there are, as yet, no robustly reliable routes to achieving Responsible Robotics, and indeed a relative paucity of compelling pictures of precisely what “responsibility” here comprises.

The aim of this Dagstuhl Seminar is to identify the key components of responsibility in this context and then, crucially, describe how we might work towards Responsible Robotics in practice. We focus on four themes associated with Responsible Robotics (trust, fairness, predictability, understandability), which we will refine and extend as necessary. Understanding the interaction between these elements will be crucial to many advanced uses of autonomous robots, especially when they operate near humans. Many commentators on social robotics have confined their attention to naming concerns. Our seminar will go beyond such criticism in two ways: it will articulate attractive goals, and it will develop tractable pathways to their implementation in real-world systems.

Trust. Trust relations between people and technology are often best described in terms of reliance as a property of the robot: we want to be able to trust technological systems in the sense that we can rely on them not to work against our interests. However, Social Robotics significantly increases the complexity of this trust relation, opening up more human-like dimensions of both our trust in robots and their perceived trustworthiness. Exploring human-robot trust relations can help Responsible Robotics translate and transfer such requirements into system development.

Fairness. Within AI Ethics, fairness is seen both as a value to be aimed at in socio-technical systems that use AI and as a property of algorithms. Two issues of fairness are of main concern: fairness of representation and fairness of allocation. Both have been thoroughly examined in the context of machine learning, but remain relatively unexplored for autonomous robotic systems. Our seminar will consider how to understand the value of fairness in Social Robotics, as well as what fairness means as a property of social robots.
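
As one deliberately simple illustration of allocation fairness, a demographic-parity check over a robot's logged assistance decisions might look as follows. The data, the group labels, and the choice of metric are hypothetical assumptions, one of many possible fairness criteria:

    def demographic_parity_gap(decisions, groups):
        """Absolute difference in positive-decision rates between two groups.

        decisions: list of 0/1 outcomes (1 = robot allocates assistance)
        groups:    parallel list of group labels, exactly two distinct values
        """
        rates = {}
        for g in set(groups):
            outcomes = [d for d, gg in zip(decisions, groups) if gg == g]
            rates[g] = sum(outcomes) / len(outcomes)
        a, b = rates.values()
        return abs(a - b)

    # Hypothetical log of a service robot's assistance decisions.
    decisions = [1, 0, 1, 1, 0, 1, 0, 0]
    groups    = ["A", "A", "A", "A", "B", "B", "B", "B"]
    print(demographic_parity_gap(decisions, groups))  # 0.5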

Predictability. Reliability, as a property of the robotic system, is one of the most empirically studied trust concepts in human-robot relations. However, we require not only reliability but also predictability, both in terms of (a) the system's decision-making processes and (b) its future behaviour. If robots are truly autonomous, we need clarity about exactly why their decisions are made, as well as how reliably they are made. We will also address the changes that occur after deployment of a system, such as changes in context, capability, and effectiveness, and how these can affect not only predictability and reliability but also ethics and responsibility.
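
As a minimal sketch of how behavioural predictability might be quantified (one of many possible operationalisations, and entirely our assumption), one could compute the Shannon entropy of a robot's observed action distribution in a fixed context, where lower entropy means more predictable behaviour:

    import math
    from collections import Counter

    def behaviour_entropy(actions):
        """Shannon entropy (bits) of a robot's observed action distribution.

        Lower entropy in a fixed context = more predictable behaviour.
        """
        counts = Counter(actions)
        total = len(actions)
        return -sum((n / total) * math.log2(n / total)
                    for n in counts.values())

    # Hypothetical logs of the same robot in the same situation, before
    # and after a software update changed its behaviour.
    before = ["yield", "yield", "yield", "overtake"]
    after  = ["yield", "overtake", "stop", "overtake"]
    print(behaviour_entropy(before))  # ~0.81 bits
    print(behaviour_entropy(after))   # 1.5 bits: less predictable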

Understandability. A cornerstone of trust is transparency - it is much harder to use, and especially to trust, robotic systems whose decision-making processes are opaque. Transparency is widely recognised as key, but it is only the foundation: beyond transparency, we require understandability in our interactions with robotic systems. In the seminar we intend to untangle the different concepts involved in understandability and to discuss how each of the necessary components, such as transparency and explainability, can be measurably attained in the case of Social Robotics.
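
As a hedged sketch of one practical step towards understandability, a robot controller could keep a transparency log that records, for each decision, the inputs it was based on and a human-readable rationale for later explanation. The schema and field names below are hypothetical:

    from dataclasses import dataclass
    from datetime import datetime, timezone

    @dataclass
    class DecisionRecord:
        """One entry in a robot's transparency log (hypothetical schema)."""
        timestamp: str
        action: str
        inputs: dict    # the observations the decision was based on
        rationale: str  # human-readable reason, for later explanation

    log: list[DecisionRecord] = []

    def record_decision(action: str, inputs: dict, rationale: str) -> None:
        log.append(DecisionRecord(
            timestamp=datetime.now(timezone.utc).isoformat(),
            action=action,
            inputs=inputs,
            rationale=rationale,
        ))

    record_decision(
        action="slow_down",
        inputs={"person_distance_m": 1.2, "speed_limit_mps": 0.5},
        rationale="person within 2 m safety envelope, so speed is capped",
    )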

Addressing these research issues - both the clarification and interaction of trust, fairness, predictability, and understandability, and the practical routes to ensuring them within Responsible Robotics - will require a collaborative effort between computer scientists, roboticists, mathematicians, psychologists, and philosophers.

Copyright Michael Fisher, Seth Lazar, Marija Slavkovik, and Astrid Weiss

Participants
  • Dejanira Araiza-Illan (Johnson & Johnson - Singapore, SG) [dblp]
  • Kevin Baum (DFKI - Saarbrücken, DE) [dblp]
  • Helen Beebee (University of Leeds, GB)
  • Raja Chatila (Sorbonne University - Paris, FR) [dblp]
  • Sarah Christensen (University of Leeds, GB)
  • Simon Coghlan (The University of Melbourne, AU) [dblp]
  • Emily Collins (University of Manchester, GB) [dblp]
  • Alcino Cunha (University of Minho - Braga, PT & INESC TEC - Porto, PT) [dblp]
  • Kate Devitt (Queensland University of Technology - Brisbane, AU) [dblp]
  • Anna Dobrosovestnova (TU Wien, AT) [dblp]
  • Hein Duijf (LMU München, DE) [dblp]
  • Vanessa Evers (University of Twente - Enschede, NL) [dblp]
  • Michael Fisher (University of Manchester, GB) [dblp]
  • Nico Hochgeschwender (Hochschule Bonn-Rhein-Sieg, DE) [dblp]
  • Nadin Kokciyan (University of Edinburgh, GB) [dblp]
  • Severin Lemaignan (PAL Robotics - Barcelona, ES) [dblp]
  • Sara Ljungblad (University of Gothenburg, SE & Chalmers University of Technology - Göteborg, SE)
  • Martin Magnusson (Örebro University, SE) [dblp]
  • Masoumeh Mansouri (University of Birmingham, GB) [dblp]
  • Michael Milford (Queensland University of Technology - Brisbane, AU) [dblp]
  • AJung Moon (McGill University - Montreal, CA) [dblp]
  • Thomas Michael Powers (University of Delaware - Newark, US) [dblp]
  • Daniel Fernando Preciado Vanegas (Free University of Amsterdam, NL) [dblp]
  • Francisco Javier Rodríguez Lera (University of León, ES) [dblp]
  • Pericle Salvini (EPFL - Lausanne, CH) [dblp]
  • Teresa Scantamburlo (University of Venice, IT) [dblp]
  • Nick Schuster (Australian National University - Canberra, AU)
  • Marija Slavkovik (University of Bergen, NO) [dblp]
  • Ufuk Topcu (University of Texas - Austin, US) [dblp]
  • Andrzej Wasowski (IT University of Copenhagen, DK) [dblp]
  • Yi Yang (KU Leuven, BE) [dblp]

Related Seminars
  • Dagstuhl Seminar 16222: Engineering Moral Agents - from Human Morality to Artificial Morality (2016-05-29 - 2016-06-03)
  • Dagstuhl Seminar 19171: Ethics and Trust: Principles, Verification and Validation (2019-04-22 - 2019-04-26)

Classification
  • Artificial Intelligence
  • Computers and Society
  • Robotics

Keywords
  • Robotics
  • Responsibility
  • Trust
  • Fairness
  • Predictability
  • Understandability
  • Ethics