Dagstuhl Seminar 24461

Rethinking the Role of Bayesianism in the Age of Modern AI

(November 10 – November 15, 2024)

Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/24461

Organizers

Contact

Dagstuhl Reports

As part of the mandatory documentation, participants are asked to submit their talk abstracts, working group results, etc. for publication in our series Dagstuhl Reports via the Dagstuhl Reports Submission System.

  • Upload (Use personal credentials as created in DOOR to log in)

Dagstuhl Seminar Wiki

Shared Documents

Program

Motivation

Despite the recent success of large-scale deep learning, these systems still fall short in terms of reliability and trustworthiness. They often lack the ability to estimate their own uncertainty in a calibrated way, to encode meaningful prior knowledge, to avoid catastrophic failures, and to reason about their environments in order to prevent such failures. Since its inception, Bayesian deep learning (BDL) has harbored the promise of achieving these desiderata by combining the solid statistical foundations of Bayesian inference with the practically successful engineering solutions of deep learning methods. This was intended to provide a principled mechanism for adding the benefits of Bayesian learning to the framework of deep neural networks.
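
As a concrete, if minimal, illustration of what calibrated uncertainty from a Bayesian-flavored deep model can look like, the sketch below uses Monte Carlo dropout as a cheap approximation to a Bayesian predictive distribution. It assumes PyTorch; the architecture, dropout rate, and sample count are illustrative placeholders, not a method prescribed by the seminar.

```python
# Minimal sketch (assumes PyTorch): Monte Carlo dropout as a cheap
# approximation to a Bayesian predictive distribution. Keeping dropout
# active at prediction time and averaging several stochastic forward
# passes yields a predictive mean and a simple per-input uncertainty.
import torch
import torch.nn as nn

# Hypothetical small regression network (10 inputs, 1 output).
model = nn.Sequential(
    nn.Linear(10, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 64), nn.ReLU(), nn.Dropout(p=0.1),
    nn.Linear(64, 1),
)

@torch.no_grad()
def mc_dropout_predict(x: torch.Tensor, n_samples: int = 50):
    """Return predictive mean and std over n_samples stochastic passes."""
    model.train()  # keep dropout layers stochastic at prediction time
    preds = torch.stack([model(x) for _ in range(n_samples)])
    return preds.mean(dim=0), preds.std(dim=0)

# Dummy inputs purely for illustration; a real model would be trained first.
mean, std = mc_dropout_predict(torch.randn(5, 10))
```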

However, BDL methods often do not live up to this promise and underdeliver in terms of real-world impact. This is due to many fundamental challenges, related for instance to the computation of approximate posteriors and the unavailability of flexible priors, but also to the lack of appropriate testbeds and benchmarks. To make things worse, there are numerous misconceptions about the scope of Bayesian methods, and researchers often end up expecting more than they can get out of Bayes. They may also overlook simpler and cheaper non-Bayesian alternatives such as the bootstrap, post-hoc uncertainty scaling, and conformal prediction. Such overexpectation followed by underdelivery can lead researchers to lose faith in the Bayesian ways, something we ourselves have witnessed in the past.
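
To make the comparison tangible, the sketch below implements one of the non-Bayesian alternatives named above, split conformal prediction, using plain NumPy. The predictor, calibration data, and miscoverage level are hypothetical placeholders; the point is only that valid prediction intervals can be obtained without any posterior inference.

```python
# Minimal sketch (NumPy only): split conformal prediction, one of the
# simpler non-Bayesian alternatives mentioned above. A held-out
# calibration set turns the residuals of any point predictor into
# prediction intervals with a finite-sample coverage guarantee.
import numpy as np

def split_conformal_intervals(predict, X_cal, y_cal, X_test, alpha=0.1):
    """Return (lower, upper) prediction intervals with ~(1 - alpha) coverage."""
    residuals = np.abs(y_cal - predict(X_cal))  # calibration scores
    n = len(residuals)
    # Finite-sample-corrected quantile of the calibration residuals.
    k = min(int(np.ceil((n + 1) * (1 - alpha))), n)
    q = np.sort(residuals)[k - 1]
    preds = predict(X_test)
    return preds - q, preds + q

# Hypothetical usage with an already-fitted regressor `model`:
# lower, upper = split_conformal_intervals(model.predict, X_cal, y_cal, X_test)
```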

So what exactly is the role of Bayes in the modern age of AI, where many of its original promises are being (or at least seem to be) unlocked simply by scaling? Non-Bayesian approaches appear to solve many problems that Bayesians once dreamt of solving with Bayesian methods. We thus believe that it is timely and important to rethink and redefine the promises and challenges of Bayesian approaches, to elucidate which Bayesian methods might prevail against their non-Bayesian competitors, and to identify key application areas where Bayes can shine.

By bringing together researchers from diverse communities, including machine learning, statistics, and deep learning practice, in a personal and interactive seminar environment featuring debates, round tables, and brainstorming sessions, we hope to discuss and answer these questions from a variety of angles and chart a path for future research that innovates, enhances, and strengthens the meaningful real-world impact of Bayesian deep learning.

Copyright Vincent Fortuin, Zoubin Ghahramani, Mohammad Emtiyaz Khan, and Mark van der Wilk

Participants

Classification
  • Artificial Intelligence
  • Machine Learning

Keywords
  • Bayesian machine learning
  • Deep learning
  • Foundation models
  • Uncertainty estimation
  • Model selection