Dagstuhl-Seminar 24192
Generalization by People and Machines
(May 5 – May 8, 2024)
Organizers
- Barbara Hammer (Universität Bielefeld, DE)
- Filip Ilievski (VU Amsterdam, NL)
- Sascha Saralajew (NEC Laboratories Europe - Heidelberg, DE)
- Frank van Harmelen (VU Amsterdam, NL)
Contact
- Andreas Dolzmann (for scientific matters)
- Susanne Bach-Bernhard (for administrative matters)
Shared Documents
- Dagstuhl Materials Page (Use personal credentials as created in DOOR to log in)
Program
- Dagstuhl: A shining light in the Hochwald Black Forest - Alessandro Oltramari in CarnegieBosch Institute News, July 9, 2024
Today's AI systems are so powerful that they have entered the mainstream and divided the world between those who believe AI will solve all our problems and those who fear it will be destructive for humanity. Meanwhile, trusting AI remains difficult given its lack of robustness to novel situations, the inconsistency of its outputs, and the opacity of its reasoning process. Adversarial studies have demonstrated that current AI approaches to tasks like visual object detection and text classification are not as robust as hoped. Models struggle to connect situations via higher-order similarities and to perform commonsense reasoning, and their performance is largely correlated with training-data frequencies. Along with informative signals, models also pick up on spurious correlations between terms and on annotation biases, while remaining insensitive to subtle variations such as negation. These findings have inspired an arms race between the robustification of models and attempts to break their robustness. Building trustworthy AI requires a paradigm shift from the current oversimplified practice of crafting accuracy-driven models to a human-centric design that can enhance human ability on manageable tasks, or enable humans and AIs to solve complex tasks together that are difficult for either separately.
At the core of this problem lies the unrivaled human ability to generalize and abstract. While today's AI can produce a response to any input, its ability to transfer knowledge to novel situations is still limited by oversimplification practices, as manifested in tasks that involve pragmatics, agent goals, and understanding of narrative structures. Some generalization is clearly enabled by scaling up data or model complexity, but this approach is hitting a limit, suggesting that something is missing. Recent work has addressed this gap to some extent by proposing modular architectures that generate rationales, track participant states in narratives, model user intent, and include planning objectives in language modeling. Meanwhile, cognitive mechanisms that drive generalization in people, such as reasoning by analogy and deriving prototypes, are popular in cognitive science research but have not gained mainstream adoption in machine learning. As there are currently no venues for cross-disciplinary research on reliable AI generalization, this discrepancy is problematic and calls for a dedicated effort to bring together, in one place, generalization experts from different fields within AI as well as from cognitive science.
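As a concrete illustration of one cognitive mechanism mentioned above, prototype-based classification (studied, e.g., in interpretable machine learning under names like learning vector quantization) generalizes by summarizing each class with a prototype and comparing new inputs to those prototypes. The following is a minimal, hypothetical sketch, not a method from the seminar itself; all names and data are illustrative:

```python
# Minimal nearest-prototype classifier: each class is summarized by the
# mean of its training examples (its "prototype"), and a new point is
# assigned to the class of the closest prototype. A toy illustration of
# prototype-based generalization, not a full LVQ model.
import math

def derive_prototypes(examples):
    """examples: list of (features, label) pairs -> {label: prototype}."""
    sums, counts = {}, {}
    for x, y in examples:
        acc = sums.setdefault(y, [0.0] * len(x))
        for i, v in enumerate(x):
            acc[i] += v
        counts[y] = counts.get(y, 0) + 1
    return {y: [v / counts[y] for v in acc] for y, acc in sums.items()}

def classify(prototypes, x):
    """Assign x to the label of the nearest prototype (Euclidean distance)."""
    return min(prototypes, key=lambda y: math.dist(x, prototypes[y]))

# Hypothetical 2-D data: two clusters labeled "a" and "b".
train = [([0.0, 0.0], "a"), ([0.2, 0.1], "a"),
         ([1.0, 1.0], "b"), ([0.9, 1.1], "b")]
protos = derive_prototypes(train)
print(classify(protos, [0.1, 0.0]))  # a point near cluster "a"
```

Because the decision reduces to "which prototype is closest", such models are directly inspectable, which is part of their appeal for human-aligned generalization.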
This Dagstuhl Seminar provides a unique opportunity to discuss the discrepancy between human and AI generalization mechanisms and to craft a vision of how to align the two streams in a compelling and promising way that combines the strengths of both. To ensure an effective seminar, we aim to bring together cross-disciplinary perspectives across computer science and cognitive science. Our participants will include experts in interpretable machine learning, neuro-symbolic reasoning, explainable AI, commonsense reasoning, case-based reasoning, analogy, cognitive science, and human-computer interaction. Specifically, the seminar will focus on the following questions: How can cognitive mechanisms in people inspire generalization in AI? Which machine learning methods hold promise for enabling such reasoning mechanisms? What is the role of data and knowledge engineering for AI and human generalization? How can we design and model human-AI teams that benefit from their complementary generalization capabilities? How can we evaluate generalization in humans and AI in a satisfactory manner?
Participants
- Wael Abd-Almageed (Clemson University, US)
- Michael Biehl (University of Groningen, NL) [dblp]
- Marianna Marcella Bolognesi (University of Bologna, IT)
- Xin Luna Dong (Meta Reality Labs - Bellevue, US) [dblp]
- Kenneth D. Forbus (Northwestern University - Evanston, US)
- Kiril Gashteovski (NEC Laboratories Europe - Heidelberg, DE)
- Barbara Hammer (Universität Bielefeld, DE) [dblp]
- Pascal Hitzler (Kansas State University - Manhattan, US) [dblp]
- Filip Ilievski (VU Amsterdam, NL)
- Giuseppe Marra (KU Leuven, BE)
- Pasquale Minervini (University of Edinburgh, GB)
- Martin Mundt (TU Darmstadt, DE) [dblp]
- Axel-Cyrille Ngonga Ngomo (Universität Paderborn, DE) [dblp]
- Alessandro Oltramari (Carnegie Bosch Institute - Pittsburgh, US)
- Benjamin Paaßen (Universität Bielefeld, DE)
- Gabriella Pasi (University of Milan, IT)
- Sascha Saralajew (NEC Laboratories Europe - Heidelberg, DE)
- Zeynep G. Saribatur (TU Wien, AT) [dblp]
- Ute Schmid (Universität Bamberg, DE) [dblp]
- Luciano Serafini (Bruno Kessler Foundation - Trento, IT) [dblp]
- Dafna Shahaf (The Hebrew University of Jerusalem, IL) [dblp]
- John Shawe-Taylor (University College London, GB) [dblp]
- Vered Shwartz (University of British Columbia - Vancouver, CA)
- Gabriella Skitalinska (Leibniz Universität Hannover, DE)
- Clemens Stachl (Universität St. Gallen, CH)
- Gido van de Ven (KU Leuven, BE) [dblp]
- Frank van Harmelen (VU Amsterdam, NL) [dblp]
- Thomas Villmann (Hochschule Mittweida, DE) [dblp]
- Piek Vossen (VU Amsterdam, NL)
- Michael R. Waldmann (Universität Göttingen, DE)
Classification
- Artificial Intelligence
- Machine Learning
- Symbolic Computation
Keywords
- Interpretable Machine Learning
- Human-AI Collaboration
- Cognitive Science
- Neuro-Symbolic Reasoning
- Explainability