
Dagstuhl Seminar 24461

Rethinking the Role of Bayesianism in the Age of Modern AI

(Nov 10 – Nov 15, 2024)


Permalink
Please use the following short url to reference this page: https://www.dagstuhl.de/24461

Organizers

  • Vincent Fortuin
  • Zoubin Ghahramani
  • Mohammad Emtiyaz Khan
  • Mark van der Wilk

Dagstuhl Reports

As part of the mandatory documentation, participants are asked to submit their talk abstracts, working group results, etc. for publication in our series Dagstuhl Reports via the Dagstuhl Reports Submission System.

  • Upload (Use personal credentials as created in DOOR to log in)


Motivation

Despite the recent success of large-scale deep learning, these systems still fall short in terms of reliability and trustworthiness. They often lack the ability to estimate their own uncertainty in a calibrated way, to encode meaningful prior knowledge, and to reason about their environments so as to avoid catastrophic failures. Since its inception, Bayesian deep learning (BDL) has harbored the promise of achieving these desiderata by combining the solid statistical foundations of Bayesian inference with the practically successful engineering solutions of deep learning, providing a principled mechanism to add the benefits of Bayesian learning to the framework of deep neural networks.

However, BDL methods often fall short of this promise and underdeliver in terms of real-world impact. This is due to many fundamental challenges, such as the computation of approximate posteriors, the unavailability of flexible priors, and the lack of appropriate testbeds and benchmarks. To make things worse, there are numerous misconceptions about the scope of Bayesian methods: researchers often expect more than Bayes can deliver, and may overlook simpler and cheaper non-Bayesian alternatives such as the bootstrap, post-hoc uncertainty scaling, and conformal prediction. Such overexpectation followed by underdelivery can lead researchers to lose faith in the Bayesian ways, something we ourselves have witnessed in the past.
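To make one of these non-Bayesian alternatives concrete: split conformal prediction turns any point predictor into a prediction interval with finite-sample marginal coverage, using only a held-out calibration set. The sketch below is a minimal illustration on hypothetical toy regression data; the task, sample sizes, and 90% coverage level are assumptions chosen for the example, not part of the seminar material.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy data for illustration: y = 2x + Gaussian noise.
x = rng.uniform(0.0, 1.0, size=200)
y = 2.0 * x + rng.normal(0.0, 0.1, size=200)

# Split into a proper training set and a disjoint calibration set.
x_train, y_train = x[:100], y[:100]
x_cal, y_cal = x[100:], y[100:]

# Fit any point predictor on the training split (here, least squares).
slope, intercept = np.polyfit(x_train, y_train, deg=1)

def predict(x_new):
    return slope * x_new + intercept

# Conformity scores: absolute residuals on the calibration set.
scores = np.abs(y_cal - predict(x_cal))

# Finite-sample-corrected quantile for 90% marginal coverage.
alpha = 0.1
n = len(scores)
level = np.ceil((n + 1) * (1 - alpha)) / n
q = np.quantile(scores, level, method="higher")

# Prediction interval for a new input: point prediction +/- q.
x_new = 0.5
lo, hi = predict(x_new) - q, predict(x_new) + q
```

Note that the interval width `q` is determined entirely by held-out residuals, so the guarantee holds regardless of whether the underlying model is well specified, which is precisely why such methods compete with Bayesian uncertainty estimates in practice.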

So, what exactly is the role of Bayes in this modern day and age of AI, where many of the original promises of Bayes are being unlocked (or at least seem to be) simply by scaling? Non-Bayesian approaches appear to solve many problems that Bayesians once dreamt of solving with Bayesian methods. We thus believe it is timely and important to rethink and redefine the promises and challenges of Bayesian approaches, to elucidate which Bayesian methods might prevail against their non-Bayesian competitors, and to identify key application areas where Bayes can shine.

By bringing together researchers from diverse communities, such as machine learning, statistics, and deep learning practice, in a personal and interactive seminar environment featuring debates, round tables, and brainstorming sessions, we hope to discuss and answer these questions from a variety of angles and chart a path for future research that innovates, enhances, and strengthens the real-world impact of Bayesian deep learning.

Copyright Vincent Fortuin, Zoubin Ghahramani, Mohammad Emtiyaz Khan, and Mark van der Wilk

Participants

Classification
  • Artificial Intelligence
  • Machine Learning

Keywords
  • Bayesian machine learning
  • Deep learning
  • Foundation models
  • Uncertainty estimation
  • Model selection