Dagstuhl Seminar 26102

Tensor Factorizations Meet Probabilistic Circuits

(Mar 01 – Mar 06, 2026)

Permalink
Please use the following short URL to reference this page: https://www.dagstuhl.de/26102

Organizers
  • Grigorios Chrysos
  • Robert Peharz
  • Volker Tresp
  • Antonio Vergari

Motivation

A number of recent successes across sub-fields of AI and ML stem from exploiting structured low-rank representations. These advances can be attributed mainly to two research communities: one working on tensor factorizations (TFs) and the other on probabilistic circuits (PCs).

For the former, prominent examples include the use of low-rank tensor factorizations for scaling large language models (LLMs), for instance via adapters and structured matrices, and the adoption of compact polynomial representations as powerful inductive biases. Furthermore, tensor networks are widely used to solve and accelerate problems in physics and quantum computing. For the latter, PCs have emerged over the last decade as models that provide both tractable probabilistic inference with guarantees, which is especially important in safety-critical applications, and a compositional way to automate probabilistic inference and build reliable neuro-symbolic AI.

Techniques from both communities rely on the same core principle: structured computational graphs that encode low-rank tensors. This shared structure is key to scaling computations to high dimensions and to obtaining closed-form solutions for many quantities of interest (see the sketch below). Despite this common ground, each community has developed its own syntax, graphical representations, and jargon to describe very similar techniques, representations, and algorithms, which limits scientific advancement and interdisciplinary collaboration. For example, only recently has an initial connection between tensor networks and circuits been established, revealing that the two communities have been developing similar methods independently. We believe that connecting these communities, and giving them a single stage to present and discuss their advancements in depth, can propel further breakthroughs and foster cross-pollination in AI and ML; a week-long seminar for experts from both fields is the ideal venue to exchange perspectives, examine recent advances closely, and build strong bridges for interdisciplinary research.
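To make this shared principle concrete, here is a minimal sketch of our own (not part of the seminar text; all names and sizes, such as D, R, and random_factor, are illustrative assumptions): a joint distribution over three discrete variables is encoded as a rank-R CP factorization, and a marginal is computed in closed form by contracting the factors, without ever materializing the exponentially large joint tensor.

    import numpy as np

    rng = np.random.default_rng(0)
    D, R = 4, 3  # each variable takes D states; R is the CP rank (assumed sizes)

    def random_factor():
        # A D x R factor matrix whose columns are distributions over D states.
        M = rng.random((D, R))
        return M / M.sum(axis=0)

    A, B, C = random_factor(), random_factor(), random_factor()
    w = rng.random(R); w /= w.sum()  # mixture weights over the R rank-one terms

    # The implied joint is P[i, j, k] = sum_r w[r] * A[i, r] * B[j, r] * C[k, r].
    # Marginalizing out the third variable is a closed-form contraction: since
    # each column of C sums to 1, P(i, j) = sum_r w[r] * A[i, r] * B[j, r].
    P_ij = np.einsum('r,ir,jr->ij', w, A, B)
    assert np.isclose(P_ij.sum(), 1.0)  # P(i, j) is a valid distribution

The same contraction is what a shallow probabilistic circuit with a single sum unit over rank-one product units computes, which hints at the correspondence between factorizations and circuits described above.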

We will emphasize the theoretical connections between TFs and PCs, as well as opportunities across both fields, and provide space for members of the different communities to align their vocabularies and share ideas. Both hierarchical tensor factorizations and PCs have been introduced as alternative representations of probabilistic graphical models, and the connection between certain circuits and factorizations has been hinted at in several works. However, the two differ in how they are applied: TFs are usually used in tasks where a ground-truth tensor to approximate is available or where a dimensionality-reduction problem can be formulated (a.k.a. tensor sketching), whereas PCs are usually learned from data in the same spirit as generative models. Similar to TFs, however, modern PC representations are overparameterized and usually encoded as a collection of tensors so as to leverage parallelism and modern deep learning frameworks (see the sketch after the questions below). This raises the following questions, which we plan to answer during this Dagstuhl Seminar:
  • (A) What are the formal connections between PCs and TFs?
  • (B) How can we extend TFs (and PCs) to handle non-linear problems commonly encountered in ML and AI?
  • (C) Under which properties are TFs and PCs provably sufficient to tractably compute queries of interest?
  • (D) How far can we scale low-rank representations on modern software and hardware?
  • (E) How can we harness recent advancements in TFs and PCs for reliable probabilistic reasoning, e.g., in the field of neuro-symbolic AI?
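As a small illustration of the tensorized encoding mentioned above (again our own sketch, with assumed sizes and a deliberately tiny architecture, not the seminar's method), the following evaluates a shallow probabilistic circuit over two discrete variables, a sum (mixture) unit over product units of categorical leaves, entirely through tensor operations, and answers a marginal query in closed form:

    import numpy as np

    rng = np.random.default_rng(1)
    D, K = 4, 3  # D states per variable, K leaf distributions per variable

    # Input layer: K categorical leaf distributions per variable, as tensors.
    leaf_x = rng.dirichlet(np.ones(D), size=K)  # shape (K, D), rows sum to 1
    leaf_y = rng.dirichlet(np.ones(D), size=K)  # shape (K, D)

    # Sum layer: normalized weights over the K*K product units.
    W = rng.random((K, K)); W /= W.sum()

    def joint(x, y):
        # P(x, y) = sum_{a,b} W[a, b] * leaf_x[a, x] * leaf_y[b, y]
        return np.einsum('ab,a,b->', W, leaf_x[:, x], leaf_y[:, y])

    # Marginalization stays tractable: summing a leaf over all of its states
    # replaces its value vector with ones, so P(x) is another contraction.
    ones_y = leaf_y.sum(axis=1)  # a vector of ones, shape (K,)
    P_x = np.array([np.einsum('ab,a,b->', W, leaf_x[:, x], ones_y)
                    for x in range(D)])
    assert np.isclose(P_x.sum(), 1.0)

Written this way, the circuit is just a small tensor network, and batching over many inputs maps directly onto the parallel primitives of modern deep learning frameworks.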

Copyright Grigorios Chrysos, Robert Peharz, Volker Tresp, and Antonio Vergari

Classification
  • Artificial Intelligence
  • Machine Learning

Keywords
  • tensor factorizations
  • probabilistic circuits
  • tractable models
  • low-rank representations