Dagstuhl Seminar 26092
Conveying the Essence via Abstraction: From Art to AI
(Feb 22 – Feb 27, 2026)
Organizers
- Claus-Christian Carbon (Universität Bamberg, DE)
- Pascal Hitzler (Kansas State University - Manhattan, US)
- Zeynep G. Saribatur (TU Wien, AT)
Contact
- Andreas Dolzmann (for scientific matters)
- Susanne Bach-Bernhard (for administrative matters)
Abstraction through simplifying and generalizing is an ability that humans unwittingly use when reasoning about and understanding the world, as it helps capture the essence of a situation. Developing AI systems with such abilities has been an intriguing challenge for decades. Over the years, different abstraction theories and methods have been investigated that change the representation while adhering to certain principles of simplification and/or generalization. Whether, and how, the abstractions that AI researchers consider “good” match the human ability remains an open question. Especially as we aim for AI systems that are transparent and understandable to humans, such systems need abstraction abilities that allow them to explain their complex decision-making and representations, a so-called “model of self” that gives an overview of their complex structures by showing the key elements, making them easier for humans to understand. Recently, the field of Explainable AI (XAI) has seen promising work involving abstraction, which demonstrates both the need for and the potential of AI systems with better abstraction abilities. To close the gap between the existing abstraction theories and methods in AI and what humans can do, further input is needed from social scientists and their investigations of human reasoning and the ability to abstract.
In this Dagstuhl Seminar, we will explore the particularly important human domain of visual art in order to gain further insight into the cognitive ability of abstraction. Art is a culturally ancient and worldwide established means for humans to express their thoughts, emotions, and views about the world, often conveying information, episodes, or experiences that cannot otherwise be expressed or verbalized. We argue that, for understanding human reasoning and especially the ability to abstract, art is a valuable and rich domain to analyze systematically. The seminar offers a unique opportunity to gather computer scientists, psychologists, cognitive scientists, artists, and art historians to explore the art domain, with the aim of understanding the cognitive tools needed to build AI systems whose improved abstraction abilities can be used for more understandable representations.
The seminar will focus on how AI and art conceptualize abstraction, identifying the similarities and differences, the role of abstraction in understandability, and how it can contribute to XAI. It will combine plenary discussions and group work with invited talks on AI and art perspectives on abstraction, human reasoning and understanding, and XAI. The seminar will also include “Lightning talks” allowing participants to present related work, and a “Demo session” for presenting empirical studies and demonstrations involving art and abstraction. Ample time will be provided for discussions within specific topic groups, whose participants come from diverse backgrounds, in order to explore the potential of looking at art for capturing the human cognitive ability of abstraction in AI.
The general idea of abstraction, “distilling the essential”, seems to be similar in AI and art, which suggests a relation between the two domains; whether this really holds, however, must be investigated in more detail by comparing the typical outcomes of both domains and how they are received. The Dagstuhl Seminar aims to pinpoint this idea across these different disciplines in the context of human understanding, in order to aid the challenge of XAI.

Classification
- Artificial Intelligence
- Human-Computer Interaction
- Symbolic Computation
Keywords
- abstraction
- visual art
- artificial intelligence
- explainability