Dagstuhl Seminar 25202

Generative Models for 3D Vision

(May 11 – May 16, 2025)

Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/25202

Organizers
  • Bernhard Egger
  • Adam Kortylewski
  • William Smith
  • Stefanie Wuhrer

Motivation

The rise of purely data-driven generative models, in particular generative adversarial networks, auto-regressive models, neural fields, and diffusion models, has led to a step change in image synthesis quality. It is now possible to create photorealistic images with high-level semantic control and to address many practical use cases such as 2D inpainting. Whilst prior models were object-specific (e.g. 3D Morphable Models of faces), we now have generative models for images and videos that can represent many object classes and generate a huge variety of objects and scenes, even in different styles. The drawback of purely data-driven approaches is that the control and explainability provided by 3D and physically-based parameters are lost. It is also difficult (and perhaps prohibitively inefficient) to learn 3D-consistent representations from 2D data alone, without prior models.

Very recently, the community has begun to explore how to combine these two philosophies. 3D computer vision tasks can benefit from the visual prior provided by generative image models: such models can learn powerful image priors with some notion of viewpoint consistency solely from 2D data, and can then be used to synthesize training data for 3D vision models. Conversely, physically-based priors from 3D vision can guide generative image models as a strong explicit inductive bias towards more data-efficient and accurate visual representations of the world. At the same time, modern generative models rely on huge training datasets and compute resources that, increasingly, are available only to large industrial research labs.

This Dagstuhl Seminar seeks to bring together researchers from computer graphics, computer vision, and machine learning, in both industry and academia, at this extremely timely moment in the progress of the field.

Copyright Bernhard Egger, Adam Kortylewski, William Smith, and Stefanie Wuhrer

Related Seminars
  • Dagstuhl Seminar 19102: 3D Morphable Models (2019-03-03 – 2019-03-08)
  • Dagstuhl Seminar 22121: 3D Morphable Models and Beyond (2022-03-20 – 2022-03-25)

Classification
  • Computer Vision and Pattern Recognition
  • Graphics
  • Machine Learning

Keywords
  • Generative Models
  • Implicit Representation
  • Diffusion Models
  • Neural Rendering
  • Inverse Rendering