Dagstuhl Seminar 23371

Roadmap for Responsible Robotics

(September 10 – 15, 2023)


Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/23371

Organizers
  • Michael Fisher
  • Seth Lazar
  • Marija Slavkovik
  • Astrid Weiss

Coordinators
  • Anna Dobrosovestnova (TU Wien, AT)
  • Nick Schuster (Australian National University - Canberra, AU)

Summary

The ISO 8373 standard ("Robots and Robotic Devices - Vocabulary") defines a robot as "an actuated mechanism programmable in two or more axes moving within its environment, to perform intended tasks". Aligned with this definition, we consider "robotics" to cover a wide range of devices - e.g. vehicles, probes, drones, industrial devices, and personal robots - as well as the complex sociotechnical processes surrounding the development and deployment of such systems. Given that robotic systems are increasingly capable of acting without direct human oversight, and that they are being deployed in an ever wider variety of contexts, a range of concerns beyond technical reliability emerges. Many authors, across a variety of disciplines, have pointed to the need for "responsibility" in robotic systems. However, while it is popular to highlight responsibility as a target, there is no agreed route to achieving responsible robotics, and often little agreement even on what responsibility here comprises.

The aim of this Dagstuhl Seminar was to identify the key components of responsibility in this context and then, crucially, provide a roadmap for achieving responsible robotics in practice. By doing so, the seminar contributed to the ongoing efforts established with the Roboethics Roadmap put forth in January 2007 by the European Robotics Research Network (EURON), the European Union's REELER, SIENNA, and TECHETHOS projects, and the UK's RoboTIPS project, among others.

In the original proposal for the seminar, four themes commonly associated with responsible robotics were emphasized: trust, fairness, reliability, and understandability. In the course of the seminar, however, the participants - comprising philosophers, engineers, roboticists, cognitive scientists, and industry representatives - identified a broader range of concerns. First, some discussions focused on what responsibility means from different disciplinary perspectives and how these perspectives apply to the development, deployment, use, and disposal of robots. In these discussions, it was emphasized that the very term "responsibility" is ambiguous in philosophy and law. The ambiguity and complexity of the term are, however, rarely reflected in debates on responsibility in the context of AI and robotics. Drawing on [1], the group discussed responsibility gaps in sociotechnical systems. We converged on an understanding of responsible robotics as broadly capturing the idea that the various parties involved in the development, deployment, integration, and maintenance of robots need to act responsibly. This involves behaving ethically in their various roles, building ethically sensitive robots, and ultimately taking responsibility for how robotics as a field progresses and how robots are used. It includes "role responsibility", relating to specific functions in robotics; "professional responsibility", which covers obligations in the robotics profession; "moral responsibility", involving ethical decision-making and the anticipation of consequences; "legal responsibility", pertaining to compliance with relevant laws and regulations; "social responsibility", regarding the broader impacts of robotic systems on human societies; and "environmental responsibility", regarding their impacts on the natural environment.

As an important step toward ensuring responsible robotics, discussions considered the diverse roles and responsibilities of key stakeholders, including businesses, universities, governments, users, and others who stand to affect, or be affected by, robotic systems. Specifically, it was noted that universities play a crucial role in shaping the professionals who design, engineer, and operate robotic systems. Engineering and design curricula should thus include modules on responsible innovation, safety standards, and the potential consequences of misuse. This could be done by intensifying dialogue and collaboration with other disciplines, in particular the humanities and social sciences, following promising initiatives such as Embedded EthiCS.

To align robotics with ethical standards, businesses in turn must conduct thorough risk assessments, addressing potential misuses and implementing safeguards in their products. For example, in the case of AI-based robotic systems, providers may rely on existing risk management frameworks such as the one recently developed by the National Institute of Standards and Technology for AI systems (https://www.nist.gov/itl/ai-risk-management-framework). Additionally, they should provide comprehensive user manuals, conduct user training programs, and actively collaborate with regulatory bodies to establish industry-wide standards. Transparent communication about the capabilities and limitations of their products is essential to ensure that users have a clear understanding of how to responsibly engage with robotic technologies.

Finally, governments play a pivotal role in creating and enforcing regulations that govern the use of robotic products and services. They must collaborate with industry experts to establish ethical guidelines, safety standards, and legal frameworks, and regulatory bodies should continuously update these frameworks to keep pace with technological advancements. Governments should also invest in public awareness campaigns to educate citizens about the benefits and risks of robots, mitigating the potential for misuse or misunderstanding.
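To make this concrete: the NIST AI Risk Management Framework organizes risk activities into four core functions (Govern, Map, Measure, Manage). The sketch below is only an illustration of how a robotics provider might track assessment items against those functions; the items themselves are invented examples, not drawn from the framework or the seminar.

```python
from dataclasses import dataclass

# The four core functions of the NIST AI Risk Management Framework.
NIST_AI_RMF_FUNCTIONS = ("Govern", "Map", "Measure", "Manage")

@dataclass
class RiskItem:
    """One risk-assessment entry, filed under an RMF core function."""
    function: str           # one of NIST_AI_RMF_FUNCTIONS
    description: str        # what is being assessed or mitigated
    resolved: bool = False  # whether a safeguard is already in place

    def __post_init__(self):
        if self.function not in NIST_AI_RMF_FUNCTIONS:
            raise ValueError(f"unknown RMF function: {self.function}")

# Invented example items for a hypothetical AI-based robotic product.
register = [
    RiskItem("Map", "Identify foreseeable misuse of the robot in public spaces"),
    RiskItem("Measure", "Quantify failure rates of obstacle avoidance near humans"),
    RiskItem("Manage", "Define a rollback procedure for faulty software updates", resolved=True),
    RiskItem("Govern", "Assign accountability for post-deployment incident review"),
]

# Report open items grouped by core function.
for fn in NIST_AI_RMF_FUNCTIONS:
    open_items = [r.description for r in register if r.function == fn and not r.resolved]
    print(f"{fn}: {len(open_items)} open item(s)")
    for desc in open_items:
        print(f"  - {desc}")
```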

Discussions also emphasized that an extended definition of responsibility, encompassing not only technical but also social and political considerations, requires a similarly expansive understanding of trust, fairness, reliability, and understandability, as well as the addition of other normative concepts. To address this, further potentially relevant concepts were identified through an iterative voting exercise. The final list included: dignity, the inherent worth of each member of the moral community who stands to be impacted by robotic systems; autonomy, enabling human beings to act in accordance with their own interests and aspirations; privacy, empowering people to protect and share sensitive information about themselves as they see fit; safety, protecting the various aspects of physical and emotional well-being; trust, ensuring that people have good reason to believe that robotic systems are aligned with their legitimate interests; justice/fairness, making the impacts of robotic systems acceptable to all who stand to be affected by them; accountability, ensuring that the right agents are held to account for adverse outcomes; and sustainability, regarding the impacts of robotic systems on the natural world and future generations. It was not our objective to generate an exhaustive list. Rather, the list reflects the principal concerns that emerged from discussion of current and near-future uses and capabilities of robotic systems.
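The report does not record how the voting exercise was run, so the following is only a minimal sketch of one plausible procedure: repeated rounds of approval voting in which concepts approved by fewer than half of the voters are dropped, until the shortlist stabilizes. The ballots and the threshold here are invented for illustration.

```python
from collections import Counter

def iterative_approval_vote(ballots, threshold=0.5, max_rounds=10):
    """Repeatedly drop concepts approved by fewer than `threshold`
    of voters until the shortlist stops changing.

    ballots: one set of approved concepts per voter.
    """
    candidates = set().union(*ballots)
    for _ in range(max_rounds):
        tally = Counter(c for ballot in ballots for c in ballot if c in candidates)
        kept = {c for c in candidates if tally[c] >= threshold * len(ballots)}
        if kept == candidates:  # stable: no concept was dropped this round
            break
        candidates = kept
    return sorted(candidates)

# Invented ballots over a few of the concepts named in the report.
ballots = [
    {"dignity", "autonomy", "privacy", "safety"},
    {"safety", "trust", "accountability"},
    {"privacy", "safety", "trust", "sustainability"},
    {"dignity", "safety", "trust", "accountability"},
]
print(iterative_approval_vote(ballots))
# ['accountability', 'dignity', 'privacy', 'safety', 'trust']
```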

Finally, in addition to these plenary discussions, four working groups were held, on the following topics:

  • Fairness
  • Trust
  • Why do robots require different considerations?
  • Predictability

References

  1. Filippo Santoni de Sio and Giulio Mecacci. Four Responsibility Gaps with Artificial Intelligence: Why They Matter and How to Address Them. Philosophy & Technology, 34:1057–1084, 2021.
Copyright Michael Fisher, Seth Lazar, Marija Slavkovik, and Nick Schuster

Motivation

Responsible Robotics is an appealing goal. It captures the idea of developing and deploying physical autonomous systems for the benefit of both individuals and society. However, although it is a popular target, there are as yet no robustly reliable routes to achieving Responsible Robotics, and indeed few compelling accounts of precisely what "responsibility" here comprises.

The aim of this Dagstuhl Seminar is to identify the key components of responsibility in this context and then, crucially, describe how we might work towards Responsible Robotics in practice. We focus on four themes associated with Responsible Robotics (trust, fairness, predictability, understandability), which we will refine and extend as necessary. Understanding the interaction between these elements will be crucial to many advanced uses of autonomous robots, especially when they operate near humans. Many commentators on social robotics have confined their attention to naming concerns. Our seminar will go beyond such criticism in two ways: it will articulate attractive goals to aim at, and it will develop tractable pathways to their implementation in real-world systems.

Trust. Trust relations between people and technology are often best described in terms of reliance as a property of the robot: we want to be able to trust technological systems in the sense that we can rely on them not to work against our interests. However, Social Robotics significantly increases the complexity of this trust relation, opening up more human-like dimensions of both our trust in robots and their perceived trustworthiness. Exploring human-robot trust relations can help Responsible Robotics translate and transfer such requirements into system development.

Fairness. Within AI Ethics, fairness is seen both as a value to be aimed at in socio-technical systems that use AI and as a property of algorithms. Two issues of fairness are of main concern: fairness of representation and fairness of allocation. Both have been thoroughly examined in the context of machine learning, but remain relatively little explored for autonomous robotic systems. Our seminar will consider how to understand the value of fairness in Social Robotics, as well as what fairness means as a property of social robots.

Predictability. Reliability, as a property of the robotic system, is one of the most empirically studied trust concepts in human-robot relations. However, we require not only reliability but also predictability, in terms of both (a) the system's decision-making processes and (b) its future behaviour. If a robot is truly autonomous, we need clarity about exactly why it makes its decisions as well as how reliably it makes them. We also address the changes that occur after a system is deployed, such as changes in context, capability, and effectiveness, and how these can affect not only predictability and reliability but also ethics and responsibility.

Understandability. A cornerstone of trust is transparency - it is much harder to use, and especially to trust, robotic systems that have opaque decision-making processes. Transparency is widely recognised as key, but it is just the foundation: we require not only transparency but also understandability in interactions with our robotic systems. In the seminar we intend to untangle the different concepts involved in understandability and discuss how each of the necessary components, such as transparency and explainability, can be measurably attained in the case of Social Robotics.
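As a way of making "transparency" and "explainability" tangible, here is a minimal sketch (not from the seminar; the rules, names, and thresholds are invented) of a controller in which every decision is paired with a human-readable rationale. Designing decision logic so that each action carries its own justification is one simple way such properties could be made inspectable, and hence measurable.

```python
from dataclasses import dataclass
from typing import Callable

@dataclass
class Rule:
    """A condition-action rule with a human-readable rationale."""
    name: str
    condition: Callable[[dict], bool]
    action: str
    rationale: str

# Invented rules for a hypothetical delivery robot.
RULES = [
    Rule("human_proximity_stop",
         lambda s: s["nearest_human_m"] < 0.5,
         "stop",
         "A person is within 0.5 m; stopping protects physical safety."),
    Rule("low_battery_return",
         lambda s: s["battery_pct"] < 15,
         "return_to_base",
         "Battery is low; returning now avoids stranding mid-task."),
    Rule("default_proceed",
         lambda s: True,
         "proceed",
         "No safety or resource constraint currently applies."),
]

def decide(state: dict) -> tuple[str, str]:
    """Return (action, explanation): the first matching rule wins,
    and every decision carries the rationale that produced it."""
    for rule in RULES:
        if rule.condition(state):
            return rule.action, f"[{rule.name}] {rule.rationale}"
    raise RuntimeError("no rule matched")  # unreachable: default rule always fires

action, why = decide({"nearest_human_m": 0.3, "battery_pct": 80})
print(action, "->", why)
# stop -> [human_proximity_stop] A person is within 0.5 m; stopping protects physical safety.
```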

Research issues, concerning both clarification and interaction of trust, fairness, predictability, and understandability and the practical routes to ensuring these within Responsible Robotics, will involve a collaborative effort between computer scientists, roboticists, mathematicians, psychologists and philosophers.

Copyright Michael Fisher, Seth Lazar, Marija Slavkovik, and Astrid Weiss

Participants

Related Seminars
  • Dagstuhl Seminar 16222: Engineering Moral Agents - from Human Morality to Artificial Morality (2016-05-29 - 2016-06-03)
  • Dagstuhl Seminar 19171: Ethics and Trust: Principles, Verification and Validation (2019-04-22 - 2019-04-26)

Classification
  • Artificial Intelligence
  • Computers and Society
  • Robotics

Keywords
  • Robotics
  • Responsibility
  • Trust
  • Fairness
  • Predictability
  • Understandability
  • Ethics