Dagstuhl-Seminar 24151
Methods and Tools for the Engineering and Assurance of Safe Autonomous Systems
(April 7 – April 12, 2024)
Organizers
- Ignacio J. Alvarez (Intel - Hillsboro, US)
- Philip Koopman (Carnegie Mellon University - Pittsburgh, US)
- Mario Trapp (TU München, DE)
- Elena Troubitsyna (KTH Royal Institute of Technology - Stockholm, SE)
Contact
- Andreas Dolzmann (for scientific matters)
- Susanne Bach-Bernhard (for administrative matters)
Shared Documents
- Dagstuhl Materials Page (use the personal credentials created in DOOR to log in)
Summary
Examples of modern autonomous systems include self-driving cars, UAVs (drones), underwater vehicles, and various industrial and home service robots. In general, autonomous systems are intended to operate without human intervention over prolonged periods of time, perceive their operating environment, and adapt to internal and external changes.
For example, a self-driving car gathers information from cameras and lidar to detect, e.g., pedestrians on the road, and to plan collision-avoidance maneuvers such as slowing down or braking, i.e., to avoid hazards. The perception functions process the inputs of various sensors and generate an internal model of the operating environment. By relying on this model, the decision functions plan and execute the actions required to achieve the goals of the mission. In general, autonomous systems follow the generic "sense-understand-decide-act" behavioral pattern, which is also traditionally adopted in robotics.
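As a minimal, purely illustrative sketch of this pattern (not the architecture of any particular system), the following Python loop wires the four stages together; all names here (WorldModel, mission_goal, the sensor and actuator objects) are hypothetical placeholders.

```python
# Minimal, illustrative sketch of the "sense-understand-decide-act" loop.
# All names (WorldModel, mission_goal, actuators, ...) are hypothetical
# placeholders, not the API of any real autonomy stack.

from dataclasses import dataclass, field


@dataclass
class WorldModel:
    """Internal model of the operating environment built by perception."""
    obstacles: list = field(default_factory=list)  # e.g., detected pedestrians


def sense(sensors):
    """Sense: gather raw observations from all sensors (cameras, lidar, ...)."""
    return [sensor.read() for sensor in sensors]


def understand(observations):
    """Understand: fuse observations into an internal world model."""
    model = WorldModel()
    for obs in observations:
        model.obstacles.extend(obs.get("detections", []))
    return model


def decide(model, mission_goal):
    """Decide: plan an action that pursues the goal while avoiding hazards."""
    if model.obstacles:
        return "brake"  # hazard detected: slow down or brake
    return mission_goal.next_action()


def act(action, actuators):
    """Act: execute the chosen action on the vehicle's actuators."""
    actuators.apply(action)


def control_step(sensors, actuators, mission_goal):
    """One pass through the pattern; real systems run this cyclically."""
    act(decide(understand(sense(sensors)), mission_goal), actuators)
```

In a real system each stage is of course far more elaborate, and the loop runs continuously at a fixed rate; the sketch only makes the data flow between the four stages explicit.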
Both sensing and decision making usually rely on Artificial Intelligence (AI), in particular Machine Learning (ML). While AI and ML algorithms have been used in robotics for several decades, their use in safety-critical systems is fairly new and is not yet appropriately addressed by safety engineering, whether from a technological, organizational, or legal point of view.
The problem of safe AI has received significant research and industrial attention over the last few years, but the approaches taken by the safety and ML communities have diverged. Moreover, it has become clear that the safety assurance problems cannot be resolved by improving the ML algorithms alone. Hence, the research communities should consolidate their efforts in creating methods and tools that enable a holistic approach to the safety of autonomous systems.
This motivated the topic of our Dagstuhl Seminar: exploring the problem of engineering and assuring the safety of autonomous systems from an interdisciplinary perspective. A group of experts from avionics, automotive, machine learning, simulation, verification and validation, and safety engineering reviewed the current academic state of the art, industry practices, and standardization efforts to determine the latest achievements and challenges in the development and safety assurance of autonomous systems.
As a result, the discussions spanned a broad range of technological, organizational, ethical, and legal topics.
Organisation of the seminar
The seminar brought together researchers and practitioners from different disciplines and application domains. Since innovation in autonomous systems is currently strongly led by industry, a significant number of participants were industrial engineers, who not only shared their best practices but also identified unsolved research problems. In constructive debates, we discussed the results of applying and experimenting with various techniques for engineering safe autonomous systems and identified open research challenges.
To facilitate open discussion among the participants and to analyze the problem of engineering safe autonomous systems from different points of view, we identified the following general discussion themes before the seminar:
- Role of formal methods in engineering and assurance of safe autonomous systems
- Regulation, assurance, and standards for safety-critical autonomous systems
- Safety of AI-based systems versus conventional technical system safety
- Safety and security interactions
- Risk acceptance for autonomous systems
This report presents summaries of the discussions, which focused on specific topics within these themes.
We would like to acknowledge the supporting contributors, the session chairs and scribes who helped to collect the information for this report: Magnus Albert (SICK AG – Waldkirch, DE), Ensar Becic (National Transportation Safety Board, US), Nicolas Becker (Stellantis France – Poissy, FR), Simon Burton (Gerlingen, DE), Radu Calinescu (University of York, GB), Betty H. C. Cheng (Michigan State University – East Lansing, US), Krzysztof Czarnecki (University of Waterloo, CA), Niels De Boer (Nanyang TU – Singapore, SG), Lydia Gauerhof (Bosch Center for AI – Renningen, DE), Jérémie Guiochet (LAAS – Toulouse, FR), Hans Hansson (Mälardalen University – Västerås, SE), Aaron Kane (Edge Case Research – Pittsburgh, US), Lars Kunze (University of Oxford, GB), Jonas Nilsson (NVIDIA Corp. – Santa Clara, US), Nick Reed (Reed Mobility – Wokingham, GB), Jan Reich (Fraunhofer IESE – Kaiserslautern, DE), Martin Rothfelder (Siemens – München, DE), Philippa Ryan (University of York, GB), Fredrik Sandblom (Zenseact AB – Gothenburg, SE), Stefano Tonetta (Bruno Kessler Foundation – Trento, IT), Kim Wasson (Joby Aviation – Santa Cruz, US), and William H. Widen (University of Miami – Coral Gables, US). In the spirit of the Chatham House Rule, which prevailed at the meeting, we do not attribute any particular written text to any particular person.
Motivation
Autonomous systems are intended to operate without human intervention over prolonged periods of time, perceive their operating environment, and adapt to changes, all while pursuing defined goals or generating new ones. The perception functions process the inputs of various sensors and generate an internal model of the operating environment. By relying on this model, the decision functions plan and execute the actions required to achieve the goals of the mission.
To achieve safety for an autonomous system, engineers must ensure that the perception functions build a sufficiently accurate model of the environment, i.e., that perception and the establishment of a context for prediction are reliable. They must also ensure that the planned actions are safe, i.e., that decisions do not result in actions that endanger humans or other agents in the operating environment.
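One commonly discussed architectural pattern for this second obligation is a runtime safety monitor, sometimes described as a doer/checker pair, in which a simple, independently specified rule can veto the proposal of a possibly ML-based planner. The Python sketch below is only an illustration under assumed types; the planner object, the obstacle's distance_after() method, and the 5-meter threshold are all invented for the example.

```python
# Illustrative doer/checker-style runtime safety monitor: an independently
# specified rule vetoes plans from the (possibly ML-based) planner.
# The planner object, distance_after(), and the 5 m threshold are assumptions.

MIN_SAFE_GAP_M = 5.0  # assumed minimum clearance to any detected obstacle


def is_action_safe(action, model):
    """Checker: accept an action only if it keeps a safe gap to all obstacles."""
    predicted_gap = min(
        (obstacle.distance_after(action) for obstacle in model.obstacles),
        default=float("inf"),  # no obstacles: any action passes this check
    )
    return predicted_gap >= MIN_SAFE_GAP_M


def safe_decide(planner, model, goal):
    """Doer proposes, checker disposes: fall back to braking on a veto."""
    proposed = planner.plan(model, goal)  # the "doer", e.g., an ML planner
    if is_action_safe(proposed, model):
        return proposed
    return "brake"  # conservative fallback action
```

The appeal of this split is that the checker can be simple enough to verify with conventional safety-engineering techniques, even when the doer is not.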
Both sensing and decision making usually rely on Artificial Intelligence (AI), in particular Machine Learning (ML). The problem of safe AI has received significant research and industrial attention over the last few years, but the approaches taken by the safety and ML communities have diverged. Moreover, it has become clear that the safety assurance problems cannot be resolved by improving the ML algorithms alone. Hence, the research communities should collaborate in creating methods and tools that enable a holistic approach to the safety of autonomous systems. It is increasingly acknowledged that work is needed on the ML methods themselves, e.g., explainability to make algorithms transparent and predictable updates (learning without forgetting). This should be complemented by a systems approach that enables safe autonomy through the integration of dedicated architectural, modelling, verification and validation, and assurance methods.
Clearly, the engineering and assurance of safe autonomous systems require more fundamental research that goes well beyond near-term industry deployment efforts. In particular, we should address such open research problems as building a robust world model; creating resilient architectures that enable graceful degradation and fail-operational behavior; making safety assurances for high-consequence long-tail events; and establishing ways to measure and regulate safety for learning-enabled systems. To develop a holistic view of the safety of autonomous systems, we plan to discuss, systematize, and integrate these problems during our seminar.
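To make "graceful degradation" concrete, the following sketch, in the same illustrative spirit as the earlier ones, orders a few hypothetical operating modes by capability and only ever moves down the ladder as component health checks fail; the modes and health inputs are invented for illustration and are not taken from any standard.

```python
# Illustrative graceful-degradation ladder: ordered operating modes, with a
# monotone rule that only ever steps the system down as health checks fail.
# Modes and health inputs are invented examples, not taken from any standard.

from enum import IntEnum


class Mode(IntEnum):
    SAFE_STOP = 0       # fail-safe: controlled stop
    MINIMAL_RISK = 1    # execute a minimal-risk maneuver, e.g., pull over
    REDUCED_SPEED = 2   # degraded perception: cap speed, widen safety margins
    FULL_AUTONOMY = 3   # all sensors and planners healthy


def select_mode(lidar_ok, camera_ok, planner_ok):
    """Pick the most capable mode whose prerequisites still hold."""
    if lidar_ok and camera_ok and planner_ok:
        return Mode.FULL_AUTONOMY
    if planner_ok and (lidar_ok or camera_ok):
        return Mode.REDUCED_SPEED
    if lidar_ok or camera_ok:
        return Mode.MINIMAL_RISK
    return Mode.SAFE_STOP


def degrade(current, lidar_ok, camera_ok, planner_ok):
    """Monotone degradation: the mode may move down but never back up."""
    return Mode(min(current, select_mode(lidar_ok, camera_ok, planner_ok)))
```

The design choice worth noting is monotonicity: once degraded, the system does not resume a higher mode without an explicit, separately assured recovery procedure.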
This Dagstuhl Seminar aims to bring together researchers and practitioners from safety engineering, systems and software engineering, modelling, verification and validation, machine learning, robotics, and autonomous systems to identify the state of the art and the key research and industrial challenges in engineering safe autonomous systems, and to define a research roadmap for safe autonomy.
Participants
- Magnus Albert (SICK AG - Waldkirch, DE)
- Ignacio J. Alvarez (Intel - Hillsboro, US)
- Claus Bahlmann (Siemens Mobility GmbH - Berlin, DE)
- Ensar Becic (National Transportation Safety Board, US)
- Nicolas Becker (Stellantis France - Poissy, FR)
- Simon Burton (Gerlingen, DE)
- Radu Calinescu (University of York, GB)
- Betty H. C. Cheng (Michigan State University - East Lansing, US)
- Krzysztof Czarnecki (University of Waterloo, CA)
- Niels De Boer (Nanyang TU - Singapore, SG)
- Francesca Favaro (Waymo LLC - Mountain View, US)
- Lydia Gauerhof (Bosch Center for AI - Renningen, DE)
- Mallory Graydon (NASA - Hampton, US)
- Jérémie Guiochet (LAAS - Toulouse, FR)
- Hans Hansson (Mälardalen University - Västerås, SE)
- Fuyuki Ishikawa (National Institute of Informatics - Tokyo, JP)
- Aaron Kane (Edge Case Research - Pittsburgh, US)
- Lennart Kilian (Siemens - München, DE)
- Jörg Koch (Renesas Electronics Europe - Düsseldorf, DE)
- Philip Koopman (Carnegie Mellon University - Pittsburgh, US)
- Lars Kunze (University of Oxford, GB)
- Jonas Nilsson (NVIDIA Corp. - Santa Clara, US)
- Ganesh J. Pai (KBR, Inc. & NASA Ames - Moffett Field, US)
- Nick Reed (Reed Mobility - Wokingham, GB)
- Jan Reich (Fraunhofer IESE - Kaiserslautern, DE)
- Martin Rothfelder (Siemens - München, DE)
- Philippa Ryan (University of York, GB)
- Fredrik Sandblom (Zenseact AB - Gothenburg, SE)
- Tiziano Santilli (Gran Sasso Science Institute - L'Aquila, IT)
- Jan Stellet (Robert Bosch GmbH - Stuttgart, DE)
- Reinhard Stolle (Fraunhofer IKS - München, DE)
- Stefano Tonetta (Bruno Kessler Foundation - Trento, IT)
- Mario Trapp (TU München, DE)
- Elena Troubitsyna (KTH Royal Institute of Technology - Stockholm, SE)
- Kim Wasson (Joby Aviation - Santa Cruz, US)
- Alan Wassyng (McMaster University - Hamilton, CA)
- William H. Widen (University of Miami - Coral Gables, US)
- Rafael Zalman (Infineon Technologies AG - Neubiberg, DE)
Classification
- Artificial Intelligence
- Logic in Computer Science
- Software Engineering
Keywords
- safety-critical autonomous systems
- software engineering
- simulation-based verification and validation
- safety assurance
- AI