Dagstuhl Seminar 23072

Challenges and Perspectives in Deep Generative Modeling

(February 12 – February 17, 2023)


Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/23072

Organizers
  • Vincent Fortuin
  • Yingzhen Li
  • Stephan Mandt
  • Kevin Murphy

Summary

Premise

Since the inception of variational autoencoders, generative adversarial networks, normalizing flows, and diffusion models, the field of deep generative modeling has grown rapidly and consistently. In recent years especially, this has led to great advances in generating images, speech, and text, and shows great promise for generating structured data such as 3D objects, videos, and molecules. However, we believe that current research has not sufficiently addressed several fundamental challenges related to evaluating and scaling these models, as well as interpreting their latent structure. These challenges manifest differently in different applications. For example, while a variational autoencoder's sensitivity to changing data distributions can induce long code lengths and poor image reconstructions in neural compression, the same sensitivity can be an asset in anomaly detection.
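
To make this duality concrete: a variational autoencoder's negative evidence lower bound (ELBO) upper-bounds the ideal code length of a data point in bits, so the very same per-example number is a rate in neural compression and a score in anomaly detection. Below is a minimal sketch of that computation; the tiny architecture and all names are illustrative assumptions, not something specified by the seminar.

```python
# Minimal sketch: a VAE's negative ELBO as an (idealized) code length in bits.
# The same per-example number drives both neural compression (smaller is
# better) and anomaly detection (larger is more anomalous). The architecture
# below is an illustrative assumption.
import math
import torch
import torch.nn as nn
import torch.nn.functional as F

class TinyVAE(nn.Module):
    def __init__(self, x_dim=784, z_dim=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(x_dim, 128), nn.ReLU(),
                                 nn.Linear(128, 2 * z_dim))  # mean and log-variance
        self.dec = nn.Sequential(nn.Linear(z_dim, 128), nn.ReLU(),
                                 nn.Linear(128, x_dim))      # Bernoulli logits

    def neg_elbo(self, x):
        """Per-example negative ELBO in nats (an upper bound on -log p(x))."""
        mu, logvar = self.enc(x).chunk(2, dim=-1)
        z = mu + torch.randn_like(mu) * (0.5 * logvar).exp()  # reparameterization
        logits = self.dec(z)
        rec = F.binary_cross_entropy_with_logits(logits, x, reduction="none").sum(-1)
        kl = 0.5 * (mu.pow(2) + logvar.exp() - 1.0 - logvar).sum(-1)
        return rec + kl

vae = TinyVAE()
x = torch.rand(8, 784)                  # stand-in batch of inputs in [0, 1]
bits = vae.neg_elbo(x) / math.log(2.0)  # nats -> bits: idealized code length
# In compression, a large `bits` value means a poor rate; in anomaly
# detection, the same large value flags the input as atypical under the model.
```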

We believe that it is most beneficial to understand the challenges of deep generative models in their practical contexts. For this reason, we have invited a combination of researchers working on the foundations of generative models and researchers working on specialized applications to this Dagstuhl Seminar. By integrating different communities, we have taken a step toward identifying generalizable solutions that cut across domains and spur innovation and new research.

We have identified three main challenges of current deep generative modeling approaches: evaluating generative models, performing scalable inference in such models, and improving the interpretability and robustness of their learned latent representations.

As examples, we have considered three application areas that draw on generative modeling and exhibit various manifestations of the aforementioned challenges: modeling scientific data, neural data compression, and out-of-distribution detection.

Structure of the seminar

We have created an open and inclusive atmosphere in which participants from different communities could mingle and exchange ideas, leaving enough room for serendipitous encounters and ad-hoc discussions. We have catalyzed this process by inviting the participants to give short talks on either models (for the researchers) or problems (for the practitioners) as a basis for subsequent discussions. We then held panel discussions and round tables on different topics, to which participants could assign themselves according to their shared interests.

To promote interactions, especially between participants who may not have known each other, we randomly grouped researchers and practitioners into pairs and small groups and assigned them small tasks, such as drafting a short abstract that would combine their interests. These activities have ultimately planted the seeds for future collaborations and fostered a sense of togetherness among the participants.

Main observations from the talks

The content of the talks is covered in more detail in the sections of the full report, but we want to take the opportunity here to highlight recurring patterns and topics that emerged.

One main observation was that while large generative models, such as diffusion models or large language models, yield impressive performance and can solve many tasks that we would naïvely not have expected them to solve well (e.g., diffusion models sorting lists or solving Sudokus, and large language models performing logical reasoning), we lack a proper theoretical understanding of these models and thus cannot guarantee their safety or reliability. This makes it particularly dangerous to use these models in critical applications, such as healthcare.

Moreover, many domains have specific requirements that are well known to practitioners but often ignored by machine learning researchers, e.g., non-i.i.d. data, safety constraints, prior knowledge, interpretability, or causal assumptions. While there are sub-fields of machine learning research studying these problems, most off-the-shelf methods do not readily provide solutions.

Finally, generative modeling holds great promise for areas such as neural compression and anomaly/out-of-distribution detection, but the practical improvements achieved by generative approaches in these domains remain limited. Making tangible real-world progress will require more targeted collaborations between experts in generative modeling and experts in these problem settings; we hope this seminar has sparked some of them.

Main takeaways from the working groups

Our working group sessions self-assembled spontaneously around key topics of interest that had emerged from the talks and informal discussions during the breaks. They focused on prior knowledge, continual learning, and anomaly detection.

Firstly, when it comes to domain knowledge, one working group tried to develop a categorization of different types and came up with physical constraints, symmetries, logic, ontologies, and factual knowledge. All of these require different approaches to incorporate them into generative models, so the developers of the model should be cognizant of the type of domain knowledge the practitioners might have. Moreover, eliciting the prior knowledge from the experts can be hard and cumbersome, and an elicitation strategy should be designed together with the model itself.
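
To illustrate just one of these categories: a known symmetry can be built into a density model after the fact by averaging its likelihood over the group orbit, with no architectural changes. The sketch below assumes a two-element horizontal-flip group and a model exposing a per-example log-density; both the interface and the stand-in density are hypothetical.

```python
# Minimal sketch: enforcing a known symmetry (here: horizontal flips) by
# averaging a density model over the group orbit. `base_log_prob` is an
# assumed stand-in for any model returning per-example log-densities.
import math
import torch

def symmetrized_log_prob(base_log_prob, x):
    """log p_sym(x) = log((p(x) + p(flip(x))) / 2) for the two-element flip group."""
    x_flipped = torch.flip(x, dims=[-1])  # flip along the last axis
    stacked = torch.stack([base_log_prob(x), base_log_prob(x_flipped)])
    return torch.logsumexp(stacked, dim=0) - math.log(2.0)

# Usage with a trivial stand-in density (a standard normal, itself flip-invariant):
base_log_prob = lambda x: torch.distributions.Normal(0.0, 1.0).log_prob(x).sum(-1)
x = torch.randn(4, 32)
print(symmetrized_log_prob(base_log_prob, x))
```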

Secondly, continual learning is well-studied in the supervised setting, but less so in the unsupervised one. However, in the age of large generative models that are very expensive to train, continually expanding their generative abilities without having to retrain them from scratch becomes paramount. Since no explicit supervised objective function is available to measure the learning progress or potential forgetting, new solutions need to be developed to efficiently learn continually without catastrophic forgetting in the generative context.
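
One candidate building block, offered here only as a sketch, is generative replay: before updating on new data, the current model's own samples are mixed into the batch so that earlier capabilities are rehearsed without storing old data. The `sample` and `loss` methods below are assumed, hypothetical interfaces of a likelihood-based generative model.

```python
# Minimal sketch of generative replay for continual learning of a generative
# model. `model.sample` and `model.loss` are hypothetical interfaces; any
# likelihood-based generative model exposing these two methods would fit.
import torch

def replay_training_step(model, optimizer, new_batch, replay_fraction=0.5):
    """One update mixing fresh data with self-generated 'replay' of old tasks."""
    n_replay = int(replay_fraction * new_batch.shape[0])
    with torch.no_grad():
        replay_batch = model.sample(n_replay)  # rehearse previously learned data
    mixed = torch.cat([new_batch, replay_batch], dim=0)
    optimizer.zero_grad()
    loss = model.loss(mixed).mean()            # e.g., negative ELBO or NLL
    loss.backward()
    optimizer.step()
    return loss.item()
```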

Lastly, anomaly detection is a hard problem that has been studied in the statistical literature for decades. Powerful novel generative models, however, harbor the promise of estimating quantities such as the compressibility or Kolmogorov complexity of data points, which might be used to detect outliers, out-of-distribution examples, and anomalous inputs more effectively.
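
Concretely, a trained likelihood-based model assigns each input a compressibility estimate in bits per dimension, and a detector then reduces to a threshold calibrated on held-out in-distribution data. A minimal sketch under that assumption follows; the `log_prob` interface and the stand-in density are hypothetical.

```python
# Minimal sketch: compressibility-based anomaly detection. A likelihood-based
# generative model scores inputs by bits per dimension (a practical proxy for
# incompressibility); a threshold fit on in-distribution data flags outliers.
# `log_prob` is an assumed interface of the trained model (hypothetical).
import math
import torch

def bits_per_dim(log_prob, x):
    """Negative log-likelihood in bits, normalized by dimensionality."""
    n_dims = x[0].numel()
    return -log_prob(x) / (n_dims * math.log(2.0))

def fit_threshold(log_prob, x_val, quantile=0.99):
    """Calibrate on held-out in-distribution data: flag the least compressible 1%."""
    return torch.quantile(bits_per_dim(log_prob, x_val), quantile)

def is_anomalous(log_prob, x, threshold):
    return bits_per_dim(log_prob, x) > threshold

# Usage with a trivial stand-in density:
log_prob = lambda x: torch.distributions.Normal(0.0, 1.0).log_prob(x).sum(dim=tuple(range(1, x.dim())))
x_val = torch.randn(1000, 3, 8, 8)
thr = fit_threshold(log_prob, x_val)
# Inputs with inflated scale are less typical, so they tend to exceed the threshold:
print(is_anomalous(log_prob, torch.randn(4, 3, 8, 8) * 3.0, thr))
```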

Copyright Vincent Fortuin, Yingzhen Li, Stephan Mandt, and Kevin Murphy

Motivation

Deep generative models, such as variational autoencoders, generative adversarial networks, normalizing flows, energy-based models, and diffusion probabilistic models, have attracted much research interest and promise to impact diverse areas such as chemistry, art, robotics, and compression. However, compared to supervised learning frameworks, their impact on real-world applications has remained limited. What can we do as a research community to promote their widespread adoption in industry and the sciences? We believe that promoting generative modeling in practical contexts is hindered by several currently overlooked challenges. In this Dagstuhl Seminar, we aim to assess the state of the art in deep generative modeling in its practical context. We hope to thereby highlight challenges that might otherwise be ignored by the research community and to showcase potentially impactful directions for future research.

We believe that some important challenges include:

  • Developing methods for assessing the quality of generated data
  • Enhancing the scope of current models and architectures to include domain knowledge, constraints, etc.
  • Enhancing the scalability and speed of current methods of training, posterior inference, and generation
  • Improving the reproducibility and/or interpretability of learned latent representations, e.g., to satisfy legal, fairness, or technological standards

To ground these theoretical challenges in practical contexts, this seminar will focus on the following application areas:

  • Generative models for text, speech, images, and video.
  • Generative modeling of scientific data. Specifically, we will consider applications in physics simulation, molecular synthesis, bioinformatics, and medicine. Challenges include incorporating scientific domain knowledge, specific data structures, and data sparsity.
  • Neural data compression. While recent research suggests that neural video and image codecs have great potential to revolutionize current standards, many open problems remain, including out-of-distribution robustness, fast parallelism, evaluating perceptual quality, and standardization.
  • Anomaly and distribution shift detection. As models that learn the data distribution, deep generative models should be useful for detecting outlier samples or changes in the data distribution. Unfortunately, generative models are still ill-suited for these tasks and remain inferior in performance to, say, self-supervised methods.

By aiming to bring together researchers working on both applied and theoretical aspects of generative modeling across application domains, we hope to identify commonly occurring problems and general-purpose solutions. Beyond the traditional talks, the workshop will be accompanied by social and group activities to foster exchange among participants.

Copyright Vincent Fortuin, Yingzhen Li, Stephan Mandt, and Kevin Murphy

Participants
  • Robert Bamler (Universität Tübingen, DE)
  • Ryan Cotterell (ETH Zürich, CH) [dblp]
  • Sina Däubener (Ruhr-Universität Bochum, DE)
  • Gerard de Melo (Hasso-Plattner-Institut, Universität Potsdam, DE) [dblp]
  • Sophie Fellenz (RPTU - Kaiserslautern, DE) [dblp]
  • Asja Fischer (Ruhr-Universität Bochum, DE) [dblp]
  • Vincent Fortuin (University of Cambridge, GB) [dblp]
  • Thomas Gärtner (Technische Universität Wien, AT) [dblp]
  • Matthias Kirchler (Hasso-Plattner-Institut, Universität Potsdam, DE) [dblp]
  • Marius Kloft (RPTU - Kaiserslautern, DE) [dblp]
  • Yingzhen Li (Imperial College London, GB)
  • Christoph Lippert (Hasso-Plattner-Institut, Universität Potsdam, DE) [dblp]
  • Stephan Mandt (University of California - Irvine, US) [dblp]
  • Laura Manduchi (ETH Zürich, CH)
  • Eric Nalisnick (University of Amsterdam, NL)
  • Björn Ommer (LMU München, DE)
  • Rajesh Ranganath (NYU Courant Institute of Mathematical Sciences, US)
  • Maja Rudolph (Bosch Center for AI - Pittsburgh, US) [dblp]
  • Alexander Rush (Cornell University - Ithaca, US) [dblp]
  • Lucas Theis (Google - London, GB)
  • Karen Ullrich (Meta - New York, US)
  • Jan-Willem van de Meent (University of Amsterdam, NL) [dblp]
  • Guy Van den Broeck (UCLA, US) [dblp]
  • Julia E. Vogt (ETH Zürich, CH)
  • Yixin Wang (University of Michigan - Ann Arbor, US)
  • Florian Wenzel (Amazon Web Services - Tübingen, DE) [dblp]
  • Frank Wood (University of British Columbia - Vancouver, CA) [dblp]

Classification
  • Artificial Intelligence
  • Computer Vision and Pattern Recognition
  • Machine Learning

Keywords
  • deep generative models
  • machine learning for science
  • neural compression
  • out-of-distribution detection