Dagstuhl Seminar 25142

Explainability in Focus: Advancing Evaluation through Reusable Experiment Design

(Mar 30 – Apr 02, 2025)

Permalink
Please use the following short URL to reference this page: https://www.dagstuhl.de/25142

Organizers
  • Ruth Mary Josephine Byrne
  • Elizabeth M. Daly
  • Simone Stumpf
  • Stefano Teso
Motivation

Explainability in artificial intelligence (AI) is paramount for ensuring the responsible and ethical deployment of these technologies across various domains. It plays a crucial role in establishing trust between AI systems and humans, particularly in applications that directly impact individuals’ lives, such as healthcare, finance, and criminal justice. The ability to provide clear, comprehensible explanations of AI-driven decisions helps demystify these complex systems, allowing users to understand the rationale behind outcomes. This transparency promotes accountability, enabling users to verify that AI systems are making decisions based on valid and unbiased data, ultimately reinforcing trust and confidence in the technology.

Yet a crucial aspect tends to be overlooked: explanations can be leveraged for different objectives, and the utility of an explanation method must be evaluated with its intended objective in mind. Explanations can enhance transparency, help users form a cognitive model of a trained ML system, aid in debugging, or assist users in deciding whether to trust a prediction or recommendation. While many explanatory mechanisms have been proposed in the community, comparing these solutions remains challenging without more standardized evaluation strategies. Compounding this issue is the versatile nature of explanations, which means that algorithm designers should, in practice, tailor their evaluation strategies to specific tasks.

The objective of this seminar is to bring together researchers, practitioners, and experts in the field of explainable AI to collaboratively develop reusable experiment designs. The evaluation of explainability methods has not been standardized by the community, so each author must develop and justify their own approach. This makes it harder for researchers to publish their findings, which in turn hinders progress in this space.

To address this, we aim to identify the different objectives and tasks of explainability methods and to use this Dagstuhl Seminar to create a repository of adaptable tasks and experiments that will be made available to the community as open-source resources. By fostering discussions, sharing insights, and creating practical frameworks, this seminar aims to accelerate progress in the field of explainability, ensuring that evaluation practices are robust, consistent, and applicable across a wide range of contexts and applications.

Our goal is to fill this gap in the community, lowering the barrier to entry for AI researchers to properly evaluate their contributions using sound evaluation strategies grounded in cognitive science.

Copyright Ruth Mary Josephine Byrne, Elizabeth M. Daly, Simone Stumpf, and Stefano Teso

Classification
  • Artificial Intelligence
  • Human-Computer Interaction
  • Machine Learning

Keywords
  • Explainability
  • Mental Models
  • Interactive Machine Learning
  • Experiment Design
  • Human-centered AI