Dagstuhl Seminar 09401
Machine learning approaches to statistical dependences and causality
(Sep 27 – Oct 02, 2009)
Organizers
- Dominik Janzing (MPI für biologische Kybernetik - Tübingen, DE)
- Steffen Lauritzen (University of Oxford, GB)
- Bernhard Schölkopf (MPI für Intelligente Systeme - Tübingen, DE)
The 2009 Dagstuhl Seminar "Machine learning approaches to statistical dependences and causality" brought together 27 researchers from machine learning, statistics, and medicine.
Machine learning has traditionally focused on prediction: given observations generated by an unknown stochastic dependency, the goal is to infer a law that will correctly predict future observations generated by the same dependency. Statistics, in contrast, has traditionally focused on data modeling, i.e., on the estimation of a probability law that has generated the data.
During recent years, the boundaries between the two disciplines have become blurred and each community has adopted methods from the other. However, it is probably fair to say that neither of them has yet fully embraced the field of causal modeling, i.e., the detection of the causal structure underlying the data. There are probably several reasons for this.
Many statisticians still shy away from developing and discussing formal methods for inferring causal structure other than through experimentation, as they traditionally regard such questions as lying outside statistical science and internal to whichever science statistics is applied to. Researchers in machine learning, on the other hand, have for too long focused on a limited set of problems, neglecting the mechanisms underlying the generation of the data as well as issues such as stochastic dependence and hypothesis testing, tools that are crucial to current methods for causal discovery.
Since the 1980s, there has been a community of researchers, mostly from statistics and philosophy, who, in spite of the prevailing views described above, have developed methods aiming at inferring causal relationships from observational data, building on the pioneering work of Glymour, Scheines, Spirtes, and Pearl. While this community has remained relatively small, it has recently been complemented by a number of researchers from machine learning. This brings a new viewpoint to the issues at hand, as well as a new set of tools, such as novel nonlinear methods for testing statistical dependences using reproducing kernel Hilbert spaces, and modern methods for independent component analysis.
The goal of the seminar was to discuss future strategies for causal learning, as well as the development of methods supporting existing causal inference algorithms, including recent developments on the border between machine learning and statistics, such as novel tests for conditional statistical dependences.
The seminar was divided into two blocks. The main block was devoted to discussing the state of the art and recent results in the field; the second block consisted of several parallel brainstorming sessions exploring potential future directions. The main block contained 23 talks whose lengths ranged from 10 minutes to 1.5 hours, depending on whether they were tutorials or more specific contributions.
Several groups presented recent approaches to causal discovery from non-interventional statistical data that significantly improve on state-of-the-art methods. Some of them allow for a better analysis of hidden common causes; others benefit from methods from other branches of machine learning, such as regression techniques, new independence tests, and independent component analysis. Scientists from medicine and brain research reported successful applications of causal inference methods in their fields, as well as challenges for the future.
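To make the flavor of the regression-based approaches concrete, the following is a minimal sketch of the additive-noise idea, not any participant's actual algorithm: fit a nonlinear regression in both candidate directions and prefer the direction in which the residuals look more independent of the putative cause. The gradient-boosted regressor and the mutual-information dependence score are illustrative placeholders; published methods use dedicated independence tests, for instance kernel-based ones.

```python
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor
from sklearn.feature_selection import mutual_info_regression

def residual_dependence(cause, effect):
    """Regress `effect` on `cause` with a nonlinear regressor and estimate
    how much the residuals still depend on `cause` (k-NN mutual information
    estimate). Small values suggest the additive-noise model fits in this
    direction."""
    X = cause.reshape(-1, 1)
    reg = GradientBoostingRegressor(random_state=0).fit(X, effect)
    resid = effect - reg.predict(X)
    return mutual_info_regression(X, resid, random_state=0)[0]

def infer_direction(x, y):
    """Prefer the direction whose residuals look more independent of the
    putative cause (regression-based additive-noise heuristic)."""
    if residual_dependence(x, y) < residual_dependence(y, x):
        return "x -> y"
    return "y -> x"

# Toy example: y = x^3 + small noise, so "x -> y" should typically be preferred.
rng = np.random.default_rng(0)
x = rng.uniform(-1.0, 1.0, 500)
y = x ** 3 + 0.1 * rng.normal(size=500)
print(infer_direction(x, y))
```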
In the brainstorming sessions, the main topics were, among others, (1) formalizing causality, (2) justifying concepts of simplicity in novel causal inference methods, and (3) conditional independence testing for continuous domains.
Regarding (1), the question of an appropriate language for causality was central and involved generalizations of the standard DAG-based concept, for instance to chain graphs. The session on item (2) addressed an important difference between causal learning and most other machine learning problems: Occam's Razor type arguments usually rely on the fact that simple hypotheses may perform better than complex ones, even if the "real world" is complex, because simplicity prevents overfitting when only a limited amount of data is available. The problem of causal learning, however, remains even in the infinite sample limit, so this standard finite-sample justification does not directly apply. The discussion on conditional independence testing (3) focused on improving recent kernel-based methods.
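As a concrete reference point for the kernel-based tests mentioned above, the following is a minimal sketch of the biased empirical HSIC statistic for unconditional independence, together with a simple permutation test; the conditional case discussed in the session is considerably harder and not covered here. The fixed kernel bandwidths are an assumption made for illustration only.

```python
import numpy as np

def rbf_gram(z, sigma):
    """RBF (Gaussian) kernel Gram matrix for a 1-D sample z."""
    d2 = (z[:, None] - z[None, :]) ** 2
    return np.exp(-d2 / (2.0 * sigma ** 2))

def hsic(x, y, sigma_x=1.0, sigma_y=1.0):
    """Biased empirical HSIC statistic: (1/n^2) * trace(K H L H), where H
    centres the Gram matrices. Larger values indicate stronger dependence."""
    n = len(x)
    K = rbf_gram(x, sigma_x)
    L = rbf_gram(y, sigma_y)
    H = np.eye(n) - np.ones((n, n)) / n
    return np.trace(K @ H @ L @ H) / n ** 2

def permutation_pvalue(x, y, n_perm=200, seed=0):
    """Approximate the null distribution by permuting y and return the
    p-value of the observed HSIC statistic."""
    rng = np.random.default_rng(seed)
    observed = hsic(x, y)
    null = [hsic(x, rng.permutation(y)) for _ in range(n_perm)]
    return float(np.mean([v >= observed for v in null]))
```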
Participants
- Nihat Ay (MPI für Mathematik in den Naturwissenschaften, DE) [dblp]
- Nicolas Brodu (University of Rennes, FR)
- Laura E. Brown (Vanderbilt University - Nashville, US)
- Vanessa Didelez (University of Bristol, GB)
- Kenji Fukumizu (Institute of Statistical Mathematics - Tokyo, JP)
- Moritz Grosse-Wentrup (MPI für biologische Kybernetik - Tübingen, DE)
- Isabelle Guyon (ClopiNet - Berkeley, US) [dblp]
- Stefan Harmeling (MPI für biologische Kybernetik - Tübingen, DE) [dblp]
- Patrik O. Hoyer (University of Helsinki, FI)
- Aapo Hyvärinen (University of Helsinki, FI)
- Dominik Janzing (MPI für biologische Kybernetik - Tübingen, DE)
- Steffen Lauritzen (University of Oxford, GB) [dblp]
- Jan Lemeire (Free University of Brussels, BE)
- Philippe Leray (University of Nantes, FR)
- Joris Mooij (MPI für biologische Kybernetik - Tübingen, DE)
- Bertram Müller-Myhsok (MPI für Psychiatrie - München, DE)
- Helene Neufeld (University of Oxford, GB)
- Jonas Peters (MPI für biologische Kybernetik - Tübingen, DE) [dblp]
- Daniel Polani (University of Hertfordshire, GB) [dblp]
- Roland Ramsahai (London School of Hygiene and Tropical Medicine, GB)
- Kayvan Sadeghi (University of Oxford, GB)
- Bernhard Schölkopf (MPI für Intelligente Systeme - Tübingen, DE) [dblp]
- Ilya Shpitser (Harvard School of Public Health, US) [dblp]
- Peter Spirtes (Carnegie Mellon University, US)
- Bastian Steudel (MPI für Mathematik in den Naturwissenschaften, DE)
- Jin Tian (Iowa State University - Ames, US)
- Robert E. Tillman (Carnegie Mellon University, US)
Classification
- artificial intelligence
- robotics