Dagstuhl Seminar 22422
Developmental Machine Learning: From Human Learning to Machines and Back
(Oct 16 – Oct 21, 2022)
Organizers
- Pierre-Yves Oudeyer (INRIA - Bordeaux, FR)
- James M. Rehg (Georgia Institute of Technology - Atlanta, US)
- Linda B. Smith (Indiana University - Bloomington, US)
- Sho Tsuji (University of Tokyo, JP)
Contact
- Marsha Kleinbauer (for scientific matters)
- Susanne Bach-Bernhard (for administrative matters)
Recent advances in artificial intelligence, enabled by large-scale datasets and simulation environments, have resulted in breakthrough improvements in areas like object and speech recognition, 3D navigation, and machine translation. In spite of these advances, modern artificial learning systems still pale in comparison to the competencies of young human learners. The differences between human learning and the paradigms that currently guide machine learning are striking. For example, children actively identify both the concepts to be learned and the data items used for learning, they infer the labels for learning from ambiguous perceptual data, and they learn from continuous streams of percepts without storing and curating large datasets. Artificial intelligence researchers are increasingly looking to developmental science for ideas and inspiration to improve machine learning, while developmental scientists are adopting tools from data science and machine learning to analyze large datasets and gain insights into developmental processes.
This seminar created new connections between the developmental and machine learning research communities by bringing together researchers in linguistics, psychology, cognitive science, and neuroscience with investigators working in computer vision, machine learning, and robotics. Its goals were to accelerate both (1) the use of developmental insights to spur advances in machine learning and (2) the use of computational models and data-driven learning to provide novel tools and insights for studying development. The seminar focused on three research questions:
- What are the key computational problems and challenges that need to be addressed in creating a developmentally-inspired machine learner? Existing machine learning methods are built on a set of canonical problem formulations such as supervised learning and reinforcement learning. At the same time, decades of research in developmental science have produced an increasingly detailed characterization of learning in children. How can we leverage these insights to create new and more powerful machine learners and revise standard ML problem formulations?
- What criteria are necessary for agent-based simulation models of development to advance machine learning and provide useful tests of developmental hypotheses? Advances in computer graphics and physics simulation have made it possible to create synthetic environments for training reinforcement learning agents to perform developmentally-relevant cognitive tasks such as navigating 3D space and manipulating objects. Can such computational experiments serve as useful tests of developmental hypotheses?
- How can data-driven computational models be used to advance developmental science? It is increasingly feasible to collect dense sensor data that captures the perceptual inputs children receive (e.g., via wearable cameras and eye trackers), their behaviors during naturalistic interactions, and a variety of contextual variables relevant to cognitive tasks. These rich datasets, in conjunction with advances in deep learning, have created the opportunity to build machine learning models that can "solve" certain developmental tasks such as object recognition (a minimal illustrative sketch of such a model follows this list). Given that such deep models do not speak directly to mechanisms of human learning, how can such research advance developmental science?
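As a purely illustrative sketch of the kind of data-driven model raised in the third question, the following PyTorch snippet trains a small encoder with a temporal-contrastive (InfoNCE-style) objective, treating temporally adjacent frames of an unlabeled egocentric video stream as positive pairs. Everything here is a hypothetical stand-in: the encoder architecture, the dimensions, and the random tensors used in place of real headcam recordings are assumptions for illustration only, not a method produced by the seminar.

```python
# Illustrative sketch only: a minimal self-supervised learner that treats
# temporally adjacent frames from an unlabeled egocentric video stream as
# positive pairs (a rough proxy for the slowly changing visual input an
# infant receives). All names, sizes, and data are hypothetical placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallEncoder(nn.Module):
    """Tiny convolutional encoder mapping 64x64 RGB frames to unit-norm embeddings."""
    def __init__(self, embed_dim: int = 128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),   # 64 -> 32
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),  # 32 -> 16
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(), # 16 -> 8
            nn.Flatten(),
            nn.Linear(128 * 8 * 8, embed_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)

def temporal_contrastive_loss(z_t, z_next, temperature: float = 0.1):
    """InfoNCE-style loss: each frame should be closest to its temporal neighbor."""
    logits = z_t @ z_next.t() / temperature   # (B, B) pairwise similarities
    targets = torch.arange(z_t.size(0))       # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy training step; random tensors stand in for consecutive video frames.
encoder = SmallEncoder()
optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

frames_t = torch.rand(32, 3, 64, 64)      # frames at time t
frames_next = torch.rand(32, 3, 64, 64)   # frames at time t+1 (positive pairs)

loss = temporal_contrastive_loss(encoder(frames_t), encoder(frames_next))
optimizer.zero_grad()
loss.backward()
optimizer.step()
```

In practice, a representation learned this way would be probed on downstream tasks such as object recognition; it is this kind of evaluation that allows such models to be compared, cautiously, against developmental findings.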
Through a seminar program consisting of tutorials, talks, working group meetings, and an early-career mentorship session, we gained interdisciplinary insights into these core research questions. Attendees discussed research directions in which the different disciplines can benefit from one another, as well as collaboration opportunities and the future development of the community. As an initial step, we aim to connect interested researchers online through social media and to provide a common repository for relevant literature.
Participants
- Thomas Carta (INRIA - Bordeaux, FR)
- David J. Crandall (Indiana University - Bloomington, US)
- Alejandrina Cristia (LSCP - Paris, FR)
- Rhodri Cusack (Trinity College Dublin, IE)
- Hana D'Souza (Cardiff University, GB)
- Maureen de Seyssel (INRIA & ENS Paris, FR)
- Emmanuel Dupoux (LSCP - Paris, FR)
- Abdellah Fourtassi (Aix-Marseille University, FR)
- Michael C. Frank (Stanford University, US)
- Hiromichi Hagihara (University of Tokyo, JP)
- Uri Hasson (Princeton University, US)
- Felix Hill (Google DeepMind - London, GB)
- Judy Hoffman (Georgia Institute of Technology - Atlanta, US)
- Celeste Kidd (University of California - Berkeley, US)
- Eon-Suk Ko (Chosun University, KR)
- Maithilee Kunda (Vanderbilt University, US)
- Marvin Lavechin (Meta AI - Paris, FR)
- Casey Lew-Williams (Princeton University, US)
- Atsushi Nakazawa (Kyoto University, JP)
- Pierre-Yves Oudeyer (INRIA - Bordeaux, FR)
- Marc'Aurelio Ranzato (DeepMind - London, GB)
- James M. Rehg (Georgia Institute of Technology - Atlanta, US)
- Clement Romac (INRIA - Bordeaux, FR)
- Rebecca Saxe (MIT - Cambridge, US)
- Olivier Sigaud (Sorbonne University - Paris, FR)
- Stefan Stojanov (Georgia Institute of Technology - Atlanta, US)
- Jelena Sucevic (University of Oxford, GB)
- Daniel Swingley (University of Pennsylvania, US)
- Ngoc Anh Thai (Georgia Institute of Technology - Atlanta, US)
- Ingmar Visser (University of Amsterdam, NL)
- Anne Warlaumont (UCLA, US)
- Gert Westermann (Lancaster University, GB)
- Chen Yu (University of Texas - Austin, US)
Classification
- Computer Vision and Pattern Recognition
- Machine Learning
- Neural and Evolutionary Computing
Keywords
- Embodied AI
- Self-supervised Learning
- Virtual Agent Simulations
- Computational Modeling of Infant Learning
- Developmental Psychology