Dagstuhl Seminar 25202

Generative Models for 3D Vision

(May 11 – May 16, 2025)

Permalink
Please use the following short URL to reference this page: https://www.dagstuhl.de/25202

Organizers

  • Bernhard Egger
  • Adam Kortylewski
  • William Smith
  • Stefanie Wuhrer

Motivation

The rise of purely data-driven generative models, in particular generative adversarial networks, auto-regressive models, neural fields, and diffusion models, has led to a step change in image synthesis quality. It is now possible to create photorealistic images with high-level semantic control and to address many desirable use cases, such as 2D inpainting. Whilst prior models were object-specific (e.g., 3D Morphable Models of faces), we now have generative models for images and videos that can represent a wide range of object classes and generate a huge variety of objects and scenes, even in different styles. The drawback of purely data-driven approaches is that the control and explainability provided by 3D and physically-based parameters are lost. It is also difficult, and perhaps prohibitively inefficient, to learn 3D-consistent representations from 2D data alone, without prior models.
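
To illustrate the kind of control such object-specific prior models offer, the following is a minimal sketch of a linear 3D Morphable Model in Python. The array sizes and variable names are hypothetical placeholders, not taken from any particular published model:

    import numpy as np

    # A linear 3D Morphable Model: every face shape is the mean shape plus a
    # low-dimensional linear combination of learned basis vectors, so each
    # parameter has an explicit, interpretable 3D meaning.
    n_vertices, n_components = 1000, 50       # hypothetical sizes
    mean_shape = np.zeros(3 * n_vertices)     # flattened (x, y, z) per vertex
    shape_basis = np.random.randn(3 * n_vertices, n_components)
    coeffs = np.zeros(n_components)
    coeffs[0] = 2.0                           # e.g. move 2 std. devs. along the first component
    face = mean_shape + shape_basis @ coeffs  # a fully controllable 3D shape

Each coefficient has an explicit geometric meaning; this is exactly the control and explainability that purely data-driven image models give up.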

Very recently, the community has begun to explore how to combine these two philosophies. 3D computer vision tasks can benefit from the visual prior provided by generative image models: generative models can learn powerful image priors with some notion of viewpoint consistency from 2D data alone and can then be used to synthesize training data for 3D vision models. Conversely, physically-based priors from 3D vision can be used to guide generative image models, acting as a strong explicit inductive bias towards more data-efficient and accurate visual representations of the world. At the same time, modern generative models rely on huge training datasets and compute resources that, increasingly, are only available to large industrial research labs.
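
One widely used instance of this combination is score distillation sampling (introduced in DreamFusion by Poole et al., 2022), where a frozen 2D diffusion model supervises the optimization of a 3D representation. Below is a minimal, hedged sketch of one such update step: render_fn, params, denoiser, and prompt_emb are hypothetical placeholders for a differentiable renderer, its 3D parameters, a frozen pretrained 2D diffusion model, and a text embedding; none of them refer to a specific library.

    import torch

    def make_alpha_bars(num_steps=1000, beta_start=1e-4, beta_end=2e-2):
        # Cumulative noise schedule as in DDPM-style diffusion models.
        betas = torch.linspace(beta_start, beta_end, num_steps)
        return torch.cumprod(1.0 - betas, dim=0)

    def sds_step(render_fn, params, denoiser, prompt_emb, optimizer, alpha_bars):
        # One score-distillation update: nudge the 3D parameters so that their
        # rendering looks plausible to a frozen, pretrained 2D diffusion prior.
        image = render_fn(params)                 # differentiable render, (1, 3, H, W)
        t = torch.randint(20, len(alpha_bars), (1,))
        a_bar = alpha_bars[t].to(image.device).view(1, 1, 1, 1)
        noise = torch.randn_like(image)
        noisy = a_bar.sqrt() * image + (1.0 - a_bar).sqrt() * noise
        with torch.no_grad():                     # the 2D prior itself is never updated
            eps_pred = denoiser(noisy, t, prompt_emb)
        # The residual (eps_pred - noise) is injected directly as a gradient on
        # the rendered image; backpropagating it through the renderer moves the
        # 3D scene towards renderings the diffusion prior finds likely.
        optimizer.zero_grad()
        image.backward(gradient=eps_pred - noise)
        optimizer.step()

The key design choice in this family of methods is that only the 3D parameters are optimized, so all 2D knowledge flows from the frozen prior through the differentiable renderer.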

This Dagstuhl Seminar seeks to bring together researchers from computer graphics, computer vision, and machine learning, in both industry and academia, at this timely moment in the field's progress.

Copyright Bernhard Egger, Adam Kortylewski, William Smith, and Stefanie Wuhrer

Participants


  • Thabo Beeler
  • Federica Bogo
  • Timo Bolkart
  • Neill Campbell
  • Andreea Dogaru
  • Bernhard Egger
  • Victoria Fernandez Abrevaya
  • James Gardner
  • Samara Ghrer
  • Marilyn Keller
  • Ron Kimmel
  • Tobias Kirschstein
  • Adam Kortylewski
  • Lingjie Liu
  • Ruoshi Liu
  • Shaifali Parashar
  • Or Patashnik
  • Ryan Po
  • Shunsuke Saito
  • William Smith
  • Siyu Tang
  • Ayush Tewari
  • Christian Theobalt
  • Gül Varol
  • Yaniv Wolf
  • Jiajun Wu
  • Stefanie Wuhrer

Related Seminars
  • Dagstuhl Seminar 19102: 3D Morphable Models (March 3 – March 8, 2019)
  • Dagstuhl Seminar 22121: 3D Morphable Models and Beyond (March 20 – March 25, 2022)

Classification
  • Computer Vision and Pattern Recognition
  • Graphics
  • Machine Learning

Keywords
  • Generative Models
  • Implicit Representation
  • Diffusion Models
  • Neural Rendering
  • Inverse Rendering