Dagstuhl Seminar 24372
Explainable AI for Sequential Decision Making
(Sep 08 – Sep 11, 2024)
Organizers
- Hendrik Baier (TU Eindhoven, NL)
- Mark T. Keane (University College Dublin, IE)
- Sarath Sreedharan (Colorado State University - Fort Collins, US)
- Silvia Tulli (Sorbonne University - Paris, FR)
- Abhinav Verma (Pennsylvania State University - University Park, US)
Contact
- Andreas Dolzmann (for scientific matters)
- Christina Schwarz (for administrative matters)
Dagstuhl Reports
As part of the mandatory documentation, participants are asked to submit their talk abstracts, working group results, etc. for publication in our series Dagstuhl Reports via the Dagstuhl Reports Submission System.
- Upload (Use personal credentials as created in DOOR to log in)
Shared Documents
- Dagstuhl Materials Page (Use personal credentials as created in DOOR to log in)
Press
- UCD CS academics invited to organise prestigious Dagstuhl seminars - University College Dublin School of Computer Science News, March 28, 2024
As we work with and rely on AI for more and more decisions that influence our lives, the research area of explainable AI (XAI) has developed rapidly, with goals such as increasing trust, enhancing collaboration, and enabling transparency in AI. However, to date, XAI has largely focused on explaining the input-output mappings of “black box” models like neural networks, which have been seen as the central obstacle to the explainability of AI systems. While these models are certainly important, intelligent behavior often extends over time and needs to be explained and understood as such. The challenge of explaining sequential decision-making (SDM), such as that of robots collaborating with humans or software agents engaged in complex ongoing tasks, has only recently gained attention. We may have AIs that can beat us at Go, but can they teach us how to play? We may have search-and-rescue robots, but can we effectively communicate with them, and coordinate missions with them, in the field?
Initial attempts to make the behavior of SDM algorithms more understandable have recently appeared in fields such as classical AI planning, reinforcement learning, multiagent systems, and logic-based argumentation, but these have often focused on ad hoc solutions to specific problems, with an emphasis on area-specific terminology and concepts developed in isolation from other fields. Many of these approaches are also restricted in scope: for example, to explanations of isolated, single actions, which do not address the full complexity of SDM; or to summaries of entire agent policies, which are often too high-level to be helpful. To truly trust an AI agent and work with it collaboratively towards human goals, and to increase successful AI adoption and acceptance in fields from robotics to logistics, and from production planning to smart cities, we need considerable progress in this new field of XAI for SDM (X-SDM), which this Dagstuhl Seminar will focus on developing.
The seminar will focus on under-researched challenges that are unique, or of particular relevance, to explainability in sequential decision-making settings. We will seek to identify and clarify such challenges, drawing on the complementary perspectives of researchers from communities such as reinforcement learning, planning, recommender systems, and multi-agent systems, which historically use different theoretical foundations and computational approaches. While participants will form working groups based on their own research interests and priorities, topics to be discussed include, for example: XAI for complex decisions, e.g., on plans or policies instead of single output labels; conversational XAI that continuously interacts with users over time, aiming to understand and support them; contestable and collaborative XAI, which can work successfully with users in areas where neither the user nor the AI is omniscient or infallible; and flexible decision-making for XAI, able to adapt to users, respect their autonomy, and go beyond one-size-fits-all explanations. The aim of the seminar is to move towards a shared understanding of the field and to develop a common roadmap for moving it forward.
Participants
- David Abel (Google DeepMind - London, GB)
- Hendrik Baier (TU Eindhoven, NL) [dblp]
- Ruth Mary Josephine Byrne (Trinity College Dublin, University of Dublin, IE) [dblp]
- Rebecca Eifler (LAAS - Toulouse, FR)
- Claudia Goldman (The Hebrew University of Jerusalem, IL) [dblp]
- Bradley Hayes (University of Colorado - Boulder, US) [dblp]
- Tobias Huber (TH Ingolstadt, DE)
- Mark T. Keane (University College Dublin, IE) [dblp]
- Khimya Khetarpal (Google DeepMind - Seattle, US) [dblp]
- Benjamin Krarup (King's College London, GB)
- Pat Langley (ISLE - Palo Alto, US) [dblp]
- Simon M. Lucas (Queen Mary University of London, GB) [dblp]
- Anna Lukina (TU Delft, NL) [dblp]
- Samer Nashed (University of Montreal, CA & MILA - Quebec AI Institute, CA)
- Sriraam Natarajan (University of Texas at Dallas - Richardson, US) [dblp]
- Ann Nowé (Free University of Brussels, BE) [dblp]
- Ron Petrick (Heriot-Watt University - Edinburgh, GB) [dblp]
- Mark Riedl (Georgia Institute of Technology - Atlanta, US) [dblp]
- Silvia Rossi (University of Naples, IT) [dblp]
- Wojciech Samek (Fraunhofer HHI - Berlin, DE) [dblp]
- Lindsay Sanneman (MIT - Cambridge, US) [dblp]
- Julian Siber (CISPA - Saarbrücken, DE)
- Sarath Sreedharan (Colorado State University - Fort Collins, US) [dblp]
- Mohan Sridharan (University of Edinburgh, GB) [dblp]
- Silvia Tulli (Sorbonne University - Paris, FR) [dblp]
- Stylianos Loukas Vasileiou (Washington University - St. Louis, US) [dblp]
- Abhinav Verma (Pennsylvania State University - University Park, US) [dblp]
Classification
- Artificial Intelligence
Keywords
- explainable artificial intelligence
- XAI
- sequential decision making