Dagstuhl Seminar 25061

Logic and Neural Networks

(Feb 02 – Feb 07, 2025)

Permalink
Please use the following short URL to reference this page: https://www.dagstuhl.de/25061

Organizers
  • Vaishak Belle
  • Michael Benedikt
  • Dana Drachsler Cohen
  • Daniel Neider

Motivation

Logic and learning are central to Computer Science, and in particular to AI-related research. As early as his 1950 paper “Computing Machinery and Intelligence”, Alan Turing envisioned combining statistical (ab initio) machine learning with an “unemotional” symbolic language such as logic. The combination of logic and learning has received new impetus from the spectacular success of deep learning systems.

The goal of this Dagstuhl Seminar is to bring together researchers from various communities related to utilizing logical constraints in deep learning and to create bridges between them via the exchange of ideas.

The seminar will focus on a set of interrelated topics:

Enforcement of constraints on neural networks. Looking at methods for training neural networks so that they satisfy formally stated requirements. This is a crucial aspect of AI safety.
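
A minimal sketch of this idea, assuming PyTorch (the model, the random data, and the rule “label A implies label B” are hypothetical stand-ins): a logical constraint is relaxed into a differentiable penalty and added to the ordinary training loss, which softly enforces the requirement during training.

    import torch
    import torch.nn.functional as F

    # Toy two-label classifier; weights and data are random stand-ins.
    model = torch.nn.Sequential(
        torch.nn.Linear(16, 32), torch.nn.ReLU(), torch.nn.Linear(32, 2))
    optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

    x = torch.randn(64, 16)                   # hypothetical inputs
    y = torch.randint(0, 2, (64, 2)).float()  # hypothetical labels for A and B

    for _ in range(100):
        probs = torch.sigmoid(model(x))       # predicted p(A), p(B) per example
        task_loss = F.binary_cross_entropy(probs, y)
        # Product relaxation of "A implies B": p(A) * (1 - p(B)) is high exactly
        # when the rule is likely violated, so minimizing it pushes the network
        # towards constraint-satisfying outputs.
        violation = (probs[:, 0] * (1.0 - probs[:, 1])).mean()
        loss = task_loss + 0.5 * violation    # 0.5 is an arbitrary trade-off weight
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()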

Verifying logical constraints on neural networks. Leveraging logic to formalize safety properties, and developing methods to prove that a network satisfies them. Examples of safety properties include local and global robustness to adversarial example attacks as well as various fairness properties.
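
To give a concrete flavor of such methods, here is a minimal sketch of interval bound propagation for certifying local robustness, assuming NumPy; the two-layer network and its random weights stand in for a trained model, and a real verifier would use much tighter relaxations.

    import numpy as np

    def affine_bounds(W, b, lower, upper):
        # Propagate the box [lower, upper] through y = W x + b.
        center, radius = (lower + upper) / 2.0, (upper - lower) / 2.0
        y_center, y_radius = W @ center + b, np.abs(W) @ radius
        return y_center - y_radius, y_center + y_radius

    def relu_bounds(lower, upper):
        return np.maximum(lower, 0.0), np.maximum(upper, 0.0)

    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(8, 4)), np.zeros(8)   # hypothetical trained weights
    W2, b2 = rng.normal(size=(3, 8)), np.zeros(3)

    x = rng.normal(size=4)      # input to certify
    eps = 0.05                  # L-infinity perturbation radius
    lo, hi = x - eps, x + eps

    lo, hi = relu_bounds(*affine_bounds(W1, b1, lo, hi))
    lo, hi = affine_bounds(W2, b2, lo, hi)

    target = int(np.argmax(W2 @ np.maximum(W1 @ x + b1, 0.0) + b2))
    # Sound (but possibly loose) certificate: the target logit's lower bound
    # must exceed every other logit's upper bound for all perturbed inputs.
    certified = all(lo[target] > hi[j] for j in range(3) if j != target)
    print("certified robust within eps:", certified)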

Training using logic to supplement traditional supervision. Augmenting supervision via the use of external knowledge, which is crucial for settings where explicit supervision is extremely limited and synthetic data generation is infeasible. For example, this approach has been utilized in scene recognition and parsing.
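
The sketch below, assuming PyTorch, illustrates this kind of weak supervision in the spirit of semantic-loss approaches: the constraint “exactly one of k labels holds” (an illustrative choice) is turned into a differentiable penalty computed from the predicted marginals of an unlabeled batch, so it can supplement a standard labeled loss.

    import torch

    def exactly_one_loss(probs):
        # P(exactly one label true) = sum_i p_i * prod_{j != i} (1 - p_j),
        # treating the predicted marginals as independent.
        batch, k = probs.shape
        sat = torch.zeros(batch)
        for i in range(k):
            others = torch.cat([probs[:, :i], probs[:, i + 1:]], dim=1)
            sat = sat + probs[:, i] * torch.prod(1.0 - others, dim=1)
        return -torch.log(sat + 1e-8).mean()  # -log P(constraint satisfied)

    logits = torch.randn(32, 5, requires_grad=True)  # hypothetical unlabeled batch
    loss = exactly_one_loss(torch.sigmoid(logits))
    loss.backward()   # differentiable, so it adds a training signal without labels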

Explanation and approximation via logic. Leveraging logical languages to explain a model’s behavior. This is crucial for black-box models and applies to various network types, including Graph Neural Networks. This topic is closely related to characterizations of the expressiveness of neural models in terms of logic and descriptive complexity, a very active area of recent research.
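
As a small illustration of logic-based explanations, the sketch below (plain Python; the model and instance are hypothetical, and the brute-force check stands in for the SAT/SMT reasoning used in practice) tests whether a subset of an instance’s feature values is a sufficient reason, i.e., whether it logically entails the model’s prediction.

    import itertools

    def predict(x):
        # Hypothetical black-box model over four boolean features.
        return int(x[0] and (x[1] or x[2]))

    def is_sufficient_reason(x, fixed):
        # The features in `fixed` are a sufficient reason if every completion
        # of the remaining features yields the same prediction as x itself.
        free = [i for i in range(len(x)) if i not in fixed]
        base = predict(x)
        for values in itertools.product([0, 1], repeat=len(free)):
            z = list(x)
            for i, v in zip(free, values):
                z[i] = v
            if predict(z) != base:
                return False
        return True

    x = [1, 1, 0, 1]
    print(is_sufficient_reason(x, {0, 1}))  # True: x0 = 1 and x1 = 1 decide the output
    print(is_sufficient_reason(x, {0}))     # False: x0 = 1 alone does not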

This Dagstuhl Seminar aims not at studying these areas as separate components, but at exploring common techniques among them as well as connections to other communities in machine learning that share the same broad goals. For example, in looking at training using rules, we will investigate links with other weakly supervised learning approaches, such as training based on physical conservation laws. In looking at verification and enforcement, we will seek contact with those working more broadly on AI safety.

The expected high-level results include the following:

  • Fostering links among researchers working on the application of learning to logical artifacts. This will include creating an understanding of the work in different applications, as well as an increased understanding of the formal connections between these applications.
  • Generating a set of goals, challenges, and research directions for the application of logic to neural networks.
  • Providing a more unified view of current approaches to the interaction between neural networks and logic, and identifying gaps in the existing formalisms that attempt this synthesis.
  • Creating bridges between researchers in computational logic and those in machine learning, and identifying ways in which enhanced interaction between the communities can continue.
  • Serving as a catalyst for the further development of benchmarks related to the use of logic in neural networks.
Copyright Vaishak Belle, Michael Benedikt, Dana Drachsler Cohen, and Daniel Neider

Classification
  • Logic in Computer Science
  • Machine Learning

Keywords
  • machine learning
  • learning theory
  • logic
  • verification
  • safety