Dagstuhl Seminar 25061
Logic and Neural Networks
(Feb 02 – Feb 07, 2025)
Organizers
- Vaishak Belle (University of Edinburgh, GB)
- Michael Benedikt (University of Oxford, GB)
- Dana Drachsler Cohen (Technion - Haifa, IL)
- Daniel Neider (TU Dortmund, DE)
Contact
- Marsha Kleinbauer (for scientific matters)
- Jutka Gasiorowski (for administrative matters)
Logic and learning are central to Computer Science, and in particular to AI-related research. As early as 1950, Alan Turing envisioned, in his paper “Computing Machinery and Intelligence”, a combination of statistical (ab initio) machine learning with an “unemotional” symbolic language such as logic. The combination of logic and learning has received new impetus from the spectacular success of deep learning systems.
The goal of this Dagstuhl Seminar is to bring together researchers from various communities related to utilizing logical constraints in deep learning and to create bridges between them via the exchange of ideas.
The seminar will focus on a set of interrelated topics:
Enforcement of constraints on neural networks. Developing methods to train neural networks so that formally stated requirements are enforced. This is a crucial aspect of AI safety.
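One simple way to enforce a requirement by construction, rather than by training alone, is to build it into the network's output layer. The sketch below (a minimal illustration, not a method discussed at the seminar; the function name `bounded_head` is hypothetical) guarantees a box constraint on the output for every possible input by squashing unconstrained scores into the required range:

```python
import numpy as np

def bounded_head(raw, lo, hi):
    """Map unconstrained network outputs into [lo, hi] by construction,
    so the requirement lo <= y <= hi holds for *every* input, with no
    verification step needed afterwards."""
    return lo + (hi - lo) / (1.0 + np.exp(-raw))

rng = np.random.default_rng(0)
raw = rng.normal(scale=10.0, size=1000)   # stand-in for arbitrary network outputs
y = bounded_head(raw, lo=-1.0, hi=1.0)    # guaranteed to lie in [-1, 1]
```

Architectural enforcement of this kind trades expressiveness for a guarantee that holds unconditionally; richer logical constraints generally require the training- or verification-based approaches described below.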
Verifying logical constraints on neural networks. Leveraging logic to formalize safety properties, and developing methods to prove that a network satisfies them. Examples of safety properties include local and global robustness to adversarial example attacks as well as various fairness properties.
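As one concrete flavor of such verification, local robustness can be certified with interval bound propagation: push an input box through the network and check that the bound for the predicted class dominates all others. The sketch below is a minimal illustration with made-up weights, not a production verifier:

```python
import numpy as np

def interval_forward(lo, hi, weights, biases):
    """Propagate an input box [lo, hi] through a feed-forward ReLU network,
    returning sound (possibly loose) bounds on every output."""
    for i, (W, b) in enumerate(zip(weights, biases)):
        Wp, Wn = np.maximum(W, 0.0), np.minimum(W, 0.0)
        lo, hi = Wp @ lo + Wn @ hi + b, Wp @ hi + Wn @ lo + b
        if i < len(weights) - 1:                      # ReLU on hidden layers only
            lo, hi = np.maximum(lo, 0.0), np.maximum(hi, 0.0)
    return lo, hi

# Tiny two-layer network with hand-picked weights (purely illustrative).
W1 = np.array([[1.0, -1.0], [0.5, 0.5]]); b1 = np.zeros(2)
W2 = np.eye(2);                           b2 = np.zeros(2)

x, eps = np.array([2.0, 0.0]), 0.1
lo, hi = interval_forward(x - eps, x + eps, [W1, W2], [b1, b2])
# Class 0 is certified robust on the eps-ball if its lower bound
# exceeds every other class's upper bound.
robust = lo[0] > hi[1]
```

Interval bounds are sound but incomplete: a failed check does not prove an adversarial example exists, which is why complete methods based on SMT or mixed-integer programming are also studied.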
Training using logic to supplement traditional supervision. Augmenting supervision via the use of external knowledge, which is crucial for settings where explicit supervision is extremely limited and synthetic data generation is infeasible. For example, this approach has been utilized in scene recognition and parsing.
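A common way to inject such knowledge is to compile a logical rule into a differentiable penalty added to the training loss. The sketch below (a minimal illustration in the spirit of semantic-loss approaches; the name `or_constraint_loss` is hypothetical) penalizes violations of the rule "y1 OR y2" given the network's predicted probabilities:

```python
import numpy as np

def or_constraint_loss(p1, p2):
    """Differentiable penalty for the rule (y1 OR y2).
    Treating p1, p2 as independent probabilities, the rule is violated
    with probability (1 - p1)(1 - p2); penalize the negative log of the
    satisfaction probability so the gradient pushes toward satisfaction."""
    sat = 1.0 - (1.0 - p1) * (1.0 - p2)
    return -np.log(sat + 1e-12)

# The penalty shrinks as either disjunct becomes likely:
low  = or_constraint_loss(0.9, 0.1)   # rule nearly satisfied -> small loss
high = or_constraint_loss(0.1, 0.1)   # rule likely violated  -> large loss
```

In practice such a term is weighted and added to the usual supervised loss (e.g. `total = cross_entropy + lam * logic_loss`), letting unlabeled examples contribute a training signal through the rule alone.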
Explanation and approximation via logic. Leveraging logical languages to explain a model’s behavior. This is crucial for black-box models and applies to various network types, including Graph Neural Networks. This topic is closely related to characterizations of the expressiveness of neural models in terms of logic and descriptive complexity, a very active area of recent research.
This Dagstuhl Seminar aims not to study these areas as separate components, but to explore common techniques among them, as well as connections to other communities in machine learning that share the same broad goals. For example, in looking at training using rules, we will investigate links with other weakly-supervised learning approaches, such as training based on physical conservation laws. In looking at verification and enforcement, we will seek contact with those working more broadly on AI safety.
The seminar is expected to produce the following high-level results:
- Fostering links among researchers working on the application of learning to logical artifacts. This will include creating an understanding of the work in different applications, as well as an increased understanding of the formal connections between these applications.
- Generating a set of goals, challenges, and research directions for the application of logic to neural networks.
- Providing a more unified view of current approaches to the interaction between neural networks and logic, and identifying gaps in the existing formalisms that attempt this synthesis.
- Creating bridges between researchers in computational logic and those in machine learning, and identifying ways in which enhanced interaction between the communities can continue.
- Serving as a catalyst for the further development of benchmarks related to the use of logic in neural networks.
Classification
- Logic in Computer Science
- Machine Learning
Keywords
- machine learning
- learning theory
- logic
- verification
- safety