Dagstuhl Seminar 24372

Explainable AI for Sequential Decision Making

( Sep 08 – Sep 11, 2024 )

Permalink
Please use the following short url to reference this page: https://www.dagstuhl.de/24372

Organizers

  • Hendrik Baier (TU Eindhoven, NL)
  • Mark T. Keane (University College Dublin, IE)
  • Sarath Sreedharan (Colorado State University - Fort Collins, US)
  • Silvia Tulli (Sorbonne University - Paris, FR)
  • Abhinav Verma (Pennsylvania State University - University Park, US)

Motivation

As we work with AI and rely on AI for more and more decisions that influence our lives, the research area of explainable AI (XAI) has rapidly developed, with goals such as increasing trust, enhancing collaboration, and enabling transparency in AI. However, to date, the focus of XAI has largely been on explaining the input-output mappings of “black box” models like neural networks, which have been seen as the central problem for the explainability of AI systems. While these models are certainly important, intelligent behavior often extends over time and needs to be explained and understood as such. The challenge of explaining sequential decision-making (SDM), such as that of robots collaborating with humans or software agents engaged in complex ongoing tasks, has only recently gained attention. We may have AIs that can beat us in Go, but can they teach us how to play? We may have search and rescue robots, but can we effectively communicate with them, and coordinate missions with them, in the field?

Initial attempts to make the behavior of SDM algorithms more understandable have recently appeared in different fields such as classical AI planning, reinforcement learning, multi-agent systems, and logic-based argumentation – but they have often focused on ad hoc solutions to specific problems, with an emphasis on area-specific terminology and concepts developed in isolation from other fields. Many of these approaches are also restricted in their scope: for example, to explanations of isolated, single actions that do not address the full complexity of SDM; or to summaries of entire agent policies, which are often too high-level to be helpful. To truly trust an AI agent and collaboratively work with it towards human goals, and to increase successful AI adoption and acceptance in many fields, from robotics to logistics and from production planning to smart cities, we need considerable progress in this new field of XAI for SDM (or X-SDM), which we will focus on developing in this Dagstuhl Seminar.

The seminar will focus on under-researched challenges that are unique to, or of particular relevance to, explainability in sequential decision-making settings. We will seek to identify and clarify such challenges, drawing on the complementary perspectives of researchers from different communities such as reinforcement learning, planning, recommender systems, and multi-agent systems, which historically use different theoretical foundations and computational approaches. While the participants will form working groups based on their own research interests and priorities, topics to be discussed may include, for example: XAI for complex decisions, e.g., for plans or policies instead of single output labels; conversational XAI that continuously interacts with users over time, aiming to understand and support them; contestable and collaborative XAI, which can work successfully with users in areas where neither the user nor the AI is omniscient or infallible; and flexible decision-making for XAI, able to adapt to users, respect their autonomy, and go beyond one-size-fits-all explanations. The aim of the seminar is to move towards a shared understanding of the field and to develop a common roadmap for moving it forward.

Copyright Hendrik Baier, Mark T. Keane, Sarath Sreedharan, Silvia Tulli, and Abhinav Verma

Participants

  • David Abel (Google DeepMind - London, GB)
  • Hendrik Baier (TU Eindhoven, NL) [dblp]
  • Ruth Mary Josephine Byrne (Trinity College Dublin, University of Dublin, IE) [dblp]
  • Rebecca Eifler (LAAS - Toulouse, FR)
  • Claudia Goldman (The Hebrew University of Jerusalem, IL) [dblp]
  • Bradley Hayes (University of Colorado - Boulder, US) [dblp]
  • Tobias Huber (TH Ingolstadt, DE)
  • Mark T. Keane (University College Dublin, IE) [dblp]
  • Khimya Khetarpal (Google DeepMind - Seattle, US) [dblp]
  • Benjamin Krarup (King's College London, GB)
  • Pat Langley (ISLE - Palo Alto, US) [dblp]
  • Simon M. Lucas (Queen Mary University of London, GB) [dblp]
  • Anna Lukina (TU Delft, NL) [dblp]
  • Samer Nashed (University of Montreal, CA & MILA - Quebec AI Institute, CA)
  • Sriraam Natarajan (University of Texas at Dallas - Richardson, US) [dblp]
  • Ann Nowé (Free University of Brussels, BE) [dblp]
  • Ron Petrick (Heriot-Watt University - Edinburgh, GB) [dblp]
  • Mark Riedl (Georgia Institute of Technology - Atlanta, US) [dblp]
  • Silvia Rossi (University of Naples, IT) [dblp]
  • Wojciech Samek (Fraunhofer HHI - Berlin, DE) [dblp]
  • Lindsay Sanneman (MIT - Cambridge, US) [dblp]
  • Julian Siber (CISPA - Saarbrücken, DE)
  • Sarath Sreedharan (Colorado State University - Fort Collins, US) [dblp]
  • Mohan Sridharan (University of Edinburgh, GB) [dblp]
  • Silvia Tulli (Sorbonne University - Paris, FR) [dblp]
  • Stylianos Loukas Vasileiou (Washington University - St. Louis, US) [dblp]
  • Abhinav Verma (Pennsylvania State University - University Park, US) [dblp]

Classification
  • Artificial Intelligence

Keywords
  • explainable artificial intelligence
  • XAI
  • sequential decision making