Dagstuhl Seminar 25202
Generative Models for 3D Vision
(May 11 – May 16, 2025)
Organizers
- Bernhard Egger (Friedrich-Alexander-Universität Erlangen-Nürnberg, DE)
- Adam Kortylewski (MPI für Informatik - Saarbrücken, DE and Universität Freiburg, DE)
- William Smith (University of York, GB)
- Stefanie Wuhrer (INRIA - Grenoble, FR)
Contact
- Marsha Kleinbauer (for scientific matters)
- Simone Schilke (for administrative matters)
The rise of purely data-driven generative models, in particular generative adversarial networks, autoregressive models, neural fields, and diffusion models, has led to a step change in image synthesis quality. It is now possible to create photorealistic images with high-level semantic control and to address many desirable use cases, such as 2D inpainting. Whilst earlier models were object-specific (e.g., 3D Morphable Models of faces), we now have generative models for images and videos that cover many object classes and can generate a huge variety of objects and scenes, even in different styles. The drawback of purely data-driven approaches is that the control and explainability provided by 3D and physically-based parameters are lost. It is also difficult (and perhaps prohibitively inefficient) to learn 3D-consistent representations from 2D data alone, without prior models.
Very recently, the community has begun to explore how to combine these two philosophies. 3D computer vision tasks can benefit from the visual priors provided by generative image models: generative models can learn powerful image priors with some notion of viewpoint consistency from 2D data alone and then be used to synthesize training data for 3D vision models. Conversely, physically-based priors from 3D vision can guide generative image models, acting as a strong explicit inductive bias towards more data-efficient and accurate visual representations of the world. At the same time, modern generative models rely on huge training datasets and compute resources that, increasingly, are available only to large industrial research labs.
This Dagstuhl Seminar seeks to bring together researchers from computer graphics, computer vision, and machine learning, in both industry and academia, at this timely moment in the field's progress.

Participants
- Thabo Beeler
- Federica Bogo
- Timo Bolkart
- Neill Campbell
- Andreea Dogaru
- Bernhard Egger
- Victoria Fernandez Abrevaya
- James Gardner
- Samara Ghrer
- Marilyn Keller
- Ron Kimmel
- Tobias Kirschstein
- Adam Kortylewski
- Lingjie Liu
- Ruoshi Liu
- Shaifali Parashar
- Or Patashnik
- Ryan Po
- Shunsuke Saito
- William Smith
- Siyu Tang
- Ayush Tewari
- Christian Theobalt
- Gül Varol
- Yaniv Wolf
- Jiajun Wu
- Stefanie Wuhrer
Classification
- Computer Vision and Pattern Recognition
- Graphics
- Machine Learning
Keywords
- Generative Models
- Implicit Representation
- Diffusion Models
- Neural Rendering
- Inverse Rendering