Dagstuhl Seminar 25291
(Actual) Neurosymbolic AI: Combining Deep Learning and Knowledge Graphs
(Jul 13 – Jul 18, 2025)
Organizers
- Pascal Hitzler (Kansas State University - Manhattan, US)
- Cogan Matthew Shimizu (Wright State University - Dayton, US)
- Daria Stepanova (Bosch Center for AI - Renningen, DE)
- Frank van Harmelen (VU Amsterdam, NL)
Contact
- Marsha Kleinbauer (for scientific matters)
- Simone Schilke (for administrative matters)
In the past decade, both deep learning (DL) and knowledge graphs (KGs) have seen astonishing growth and groundbreaking milestones – DL due to newly available resources (e.g., accessibility of (modern) web-scale data), previously un-scalable techniques (e.g., transformers), and modern hardware; KGs due to successful standardization, web-scale integration, and previously un-scalable techniques for querying and inference. This has brought new and increased interest in both fields, and especially in how they can complement each other.
DL systems have been successfully applied in a massive range of use cases. Especially prominent in the zeitgeist are large language models (LLMs), many of which are based on transformers and are increasingly accessible and diverse. Notably, LLMs struggle to distinguish fact from fiction, partly due to the nature of their construction, but also due to the quality or type of data used in training. One way to mitigate this problem is through structured, human-curated knowledge, and a prominent form of such knowledge is the KG, which is widely used as a platform for knowledge management and symbolic data representation. KGs have quickly become a major paradigm for the creation, extraction, integration, representation, and visualization of data, based on long-established W3C standards and recommendations; this is especially the case when an ontology is used as the schema. Yet the construction of a KG and the development of a high-quality ontology can be very effort-intensive. Furthermore, symbolic representations are brittle, in contrast to DL models, which can learn from (possibly) noisy data.
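To make the grounding idea concrete, the following minimal sketch (purely illustrative, not a seminar contribution) retrieves facts for a question from a tiny in-memory triple set and prepends them to an LLM prompt. The toy triples, the substring-based entity matching, and the prompt template are all assumptions made for illustration; a real system would use a proper triple store, entity linking, and an actual LLM call.

```python
# Illustrative sketch: grounding an LLM prompt with facts retrieved from a
# tiny in-memory knowledge graph. All data and matching logic are toy examples.

KG = {
    ("Dagstuhl", "locatedIn", "Germany"),
    ("Dagstuhl", "hosts", "Seminar 25291"),
    ("Seminar 25291", "topic", "Neurosymbolic AI"),
}

def retrieve_facts(question: str, kg: set[tuple[str, str, str]]) -> list[str]:
    """Return triples whose subject or object is mentioned in the question."""
    question_lc = question.lower()
    return [
        f"{s} {p} {o}."
        for s, p, o in kg
        if s.lower() in question_lc or o.lower() in question_lc
    ]

def grounded_prompt(question: str) -> str:
    """Prepend retrieved, human-curated facts so the model answers from them."""
    facts = retrieve_facts(question, KG)
    context = "\n".join(facts) if facts else "No relevant facts found."
    return f"Answer using only these facts:\n{context}\n\nQuestion: {question}"

if __name__ == "__main__":
    # The resulting prompt would then be passed to any LLM of choice.
    print(grounded_prompt("What is the topic of Seminar 25291?"))
```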
This Dagstuhl Seminar will focus on bridging the gap between deep learning and knowledge graphs, and will discuss their integration: neurosymbolic AI. The aim is to advance the understanding of how symbolic knowledge, in the form of KGs, can be used to improve the capabilities of DL systems, and how DL can advance KG construction and applications. Our participants will include experts and emerging researchers deeply embedded in both fields, whose work spans the following (non-exhaustive) lines of investigation:
- Grounding of DL models in facts sourced from symbolic models (e.g., KGs).
- Utilization of symbolic knowledge to improve the explainability, robustness, interpretability, generalization, and transferability of DL models and their behaviors.
- Fusion strategies that combine neural and symbolic approaches, each applied where it excels and the other does not.
- Incorporating common-sense and domain knowledge into systems for improved causal analysis.
- The use of DL models to extract symbolic knowledge from data.
- Vectorization of data with symbolic knowledge and the vectorization of knowledge with data (a minimal embedding sketch follows this list).
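As a minimal illustration of the last item, the sketch below uses a TransE-style scoring function (one common KG-embedding approach, Bordes et al., 2013) to represent entities and relations as vectors. The toy graph, the embedding dimension, and the absence of actual training are illustrative simplifications, not part of the seminar programme.

```python
# Minimal sketch of "vectorizing" symbolic knowledge with a TransE-style model.
import numpy as np

rng = np.random.default_rng(0)
entities = ["Berlin", "Germany", "Paris", "France"]
relations = ["capitalOf"]
dim = 16

# Each entity and relation is mapped to a learnable vector (randomly initialized here).
E = {e: rng.normal(size=dim) for e in entities}
R = {r: rng.normal(size=dim) for r in relations}

def score(h: str, r: str, t: str) -> float:
    """TransE plausibility: true triples should satisfy h + r ≈ t (small distance)."""
    return float(np.linalg.norm(E[h] + R[r] - E[t]))

# After training (e.g., minimizing a margin loss over true vs. corrupted triples),
# the true triple should score lower (closer) than the corrupted one.
print("Berlin capitalOf Germany :", score("Berlin", "capitalOf", "Germany"))
print("Berlin capitalOf France  :", score("Berlin", "capitalOf", "France"))
```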
Additionally, time will be set aside to tackle blue-sky ideas, such as the integration of biologically inspired DL systems as they relate to human-level learning, retention, and execution of procedural knowledge.
Classification
- Artificial Intelligence
- Logic in Computer Science
- Machine Learning
Keywords
- neurosymbolic ai
- knowledge graphs
- deep learning