Research Meeting 24463
Explainable Decision-Making
(Nov 13 – Nov 15, 2024)
Organizer
- Wolfgang Maaß (UdS - Saarbrücken, DE & DFKI - Saarbrücken, DE)
Contact
- Heike Clemens (for administrative matters)
Explainable decision-making refers to methods and practices that allow humans to understand and trust decisions made by systems that use artificial intelligence (AI) and machine learning (ML) components. The topic has gained prominence as AI and ML models have become more complex, making their decision-making processes less transparent and harder to interpret. From a computer science perspective, explainable decision-making encompasses several key aspects: (1) transparency, which allows the model's correctness to be validated and ensures it operates as intended; (2) interpretability, i.e., the degree to which a human can understand the cause of a decision made by an AI system; (3) explainability, i.e., providing understandable reasons for decisions to end users in a manner that is meaningful to them; and (4) fairness and bias evaluation, ensuring that AI systems operate fairly across different groups of individuals.
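As a concrete illustration of the interpretability aspect (2), the sketch below computes permutation feature importances for a simple classifier, one common model-agnostic way to see which inputs a model's decisions depend on. It is an illustrative example only, not material from the meeting; the use of scikit-learn and the synthetic dataset are assumptions made for the sketch.

```python
# Minimal interpretability sketch (assumes scikit-learn is installed):
# permutation feature importance on a synthetic classification task.
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

# Synthetic data standing in for a real decision-making task (hypothetical).
X, y = make_classification(n_samples=500, n_features=6, n_informative=3,
                           random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Shuffle each feature on held-out data and measure the drop in accuracy;
# larger drops indicate features the model's decisions rely on more heavily.
result = permutation_importance(model, X_test, y_test, n_repeats=10,
                                random_state=0)
for i, (mean, std) in enumerate(zip(result.importances_mean,
                                    result.importances_std)):
    print(f"feature {i}: importance {mean:.3f} +/- {std:.3f}")
```

Feature-attribution summaries of this kind address interpretability for model developers; turning them into explanations that are meaningful to end users, and auditing them across groups for fairness, are the complementary aspects (3) and (4) above.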
During the research meeting, participants will present their work and discuss methods for explainable decision-making in AI, with a special focus on applications in health AI (including genomics), healthcare services, resilience management, and sustainability assessment. The meeting aims to foster a comprehensive understanding of these areas, share the latest research findings, and envision future directions.