Dagstuhl Seminar 23442
Approaches and Applications of Inductive Programming
(October 29 – November 3, 2023)
Organizers
- Andrew Cropper (University of Oxford, GB)
- Luc De Raedt (KU Leuven, BE)
- Richard Evans (DeepMind - London, GB)
- Ute Schmid (Universität Bamberg, DE)
Contact
- Michael Gerke (for scientific matters)
- Jutka Gasiorowski (for administrative matters)
Program
Inductive programming (IP) is a special perspective on program synthesis, addressing the learning of programs from incomplete specifications such as input/output examples. The seminar "Approaches and Applications of Inductive Programming" (AAIP) took place at Dagstuhl for the sixth time. It brought together researchers from different areas of artificial intelligence, machine learning, formal methods, programming languages, cognitive science, and human-computer interaction who are interested in methods and applications of IP. The focus topics of AAIP'23 were neuro-symbolic approaches to IP that bring together learning and reasoning, IP as a post-hoc approach to explaining the decision-making of deep-learning black-box models, and the potential of deep-learning approaches, especially large language models such as OpenAI Codex, for IP.
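To make the problem setting concrete, the following minimal sketch (Python, with a small hypothetical set of string primitives; it is not one of the systems presented at the seminar) induces a program from two input/output examples by enumerating compositions of primitives:

    # Minimal illustrative sketch of inductive programming as generate-and-test:
    # search for a composition of primitives that is consistent with all
    # input/output examples. The primitive set and the task are hypothetical.
    from itertools import product

    PRIMITIVES = {
        "lower":      str.lower,
        "strip":      str.strip,
        "first_char": lambda s: s[:1],
        "reverse":    lambda s: s[::-1],
    }

    def synthesize(examples, max_len=3):
        """Return the shortest pipeline of primitives that reproduces every
        (input, output) example, or None if no such pipeline exists."""
        for length in range(1, max_len + 1):
            for names in product(PRIMITIVES, repeat=length):
                def run(s, names=names):
                    for name in names:
                        s = PRIMITIVES[name](s)
                    return s
                if all(run(i) == o for i, o in examples):
                    return names  # the induced program
        return None

    # Incomplete specification: two input/output examples.
    print(synthesize([("  Dagstuhl ", "d"), ("Seminar", "s")]))
    # -> ('lower', 'strip', 'first_char')

Actual IP systems replace such brute-force enumeration with language bias, search pruning, and the ability to introduce recursion and invented predicates.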
The focus topics were introduced and discussed in a series of talks addressing neuro-symbolic IP, IP for learning in planning, explainable AI and IP, and IP and generative AI. A further series of talks was dedicated to the relation of cognitive science to IP: human-like few-shot learning via Bayesian reasoning over natural language, the child as hacker, using program synthesis to model strategy diversity in human visual reasoning, a neurodiversity-inspired solver for the Abstraction and Reasoning Corpus (ARC) using visual imagery and program synthesis, and using natural language for self-programming in cognitive architectures. The relation between IP and explainability was highlighted in talks on explainable models via compression of relational ensembles and on the effects of explaining machine-learned logic programs for human comprehension and discovery. Relations between IP and knowledge-based methods were addressed in a talk on learning disjointness axioms for knowledge graph refinement and for making knowledge graph embedding methods more robust. Methods of IP as an approach to learning interpretable rules were presented with a focus on inductive logic programming (ILP), deep rule learning, relational program synthesis with numerical reasoning, improving rule classifiers learned from quantitative data by recovering information lost through discretisation, meta-interpretive learning for generalised planning, probabilistic inductive logic programming, abstraction for answer set programs, anti-unification and generalization, programmatic reinforcement learning, and making program synthesis fast on a GPU. These talks were complemented by several system demos presenting the ILP systems Popper and Louise, an RDF rule learner, and a system that learns rules to sort e-mails into folders (EmFORE).
We identified four relevant research problems for current and future research in IP, which were addressed in in-depth discussions in working groups and afterwards discussed in plenary sessions:
(1) Large Language Models and Inductive Programming in Cognitive Architectures: one main outcome was that combining learning and reasoning by integrating LLMs and reasoners in a cognitive architecture could enable the validation of programs that are executed by the overall architecture and could possibly bring performance closer to that of humans.
(2) Avoiding Too Much Search in Inductive Programming: it was noted that IP in general needs to learn structure as well as probabilities. Classic IP approaches focus on structure learning and, in contrast to neural network architectures, can learn recursion explicitly. The main result was that suitable problem domains should be identified for systematic evaluation, such as string transformations that combine syntactic (e.g., return the first letter) and semantic (e.g., return the capital of a country) operations.
(3) Finding Suitable Benchmark Problems for Inductive Programming: here, the discussion of the second topic was extended and systematised by formulating several criteria for benchmark problems to evaluate IP approaches, among them that the problems should not be solvable by LLMs but should be efficiently solvable by humans.
(4) Evaluation Criteria for Interpretability and Explainability of Inductive Programming: the main insight was that the degree of interpretability and the quality of explanations are strongly context-dependent, being influenced by the recipient (who), the content (what), the information need and reason for an explanation (why), and the form of the explanation (how). Different candidate metrics were identified, such as complexity measures, semantic coherence, and the reliability of generated code.
In a final discussion round, several outcomes were summarized and action points were discussed. A crucial problem, which might impact scientific progress as well as visibility, could be that there is no core general approach to IP (such as gradient descent for neural networks). Relevant use cases might not focus on learning recursion/loops but on relations (e.g., in medicine and biology). The focus on learning programs (including recursion) might profit from using Python as the target language instead of more specific languages such as Prolog. Furthermore, current IP systems are mostly not easy to find or to use; providing a toolbox that can be used easily (such as Weka for standard ML) might be helpful. There was general agreement among the participants that the format of Dagstuhl Seminars is especially fruitful for bringing together the different perspectives on IP from machine learning, cognitive science, and programming language research.
The goal of inductive programming (IP), also called inductive program synthesis, is to learn computer programs from data. IP is a special case of induction, addressing the automated or semi-automated generation of a computer program from incomplete information such as input-output examples, demonstrations, or computation traces (also known as programming by example). Mostly, declarative (logic or functional) programs are synthesized, and the learned programs are often recursive. Examples are learning list-manipulation programs, learning strategies for game playing, and learning constraints for scheduling problems. IP is of interest to researchers from many areas of computer science, including machine learning, automated reasoning, program verification, and software engineering. Furthermore, IP contributes to research outside computer science, notably in cognitive science, where IP can help build models of human inductive learning and contribute methods for intelligent tutoring systems for programming education. IP is also of relevance for researchers in industry, providing tools for end-user programming such as the Microsoft Excel plug-in FlashFill.
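As a purely illustrative, hypothetical example (written in Python rather than a declarative language): given a handful of input/output pairs, an IP system would be expected to induce a recursive hypothesis such as the following.

    # Hypothetical example of the kind of recursive list-manipulation program
    # inductive programming aims to find; the examples and the induced
    # hypothesis are illustrative only.

    examples = [([1], 1), ([4, 7], 7), ([2, 5, 9], 9)]   # incomplete specification

    def last(xs):
        """Induced hypothesis: the last element of a non-empty list, defined
        recursively (base case: singleton list; step: drop the head)."""
        head, *tail = xs
        return head if not tail else last(tail)

    assert all(last(i) == o for i, o in examples)        # consistent with all examples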
Focus topics of the planned seminar will be different aspects of neuro-symbolic approaches to IP, especially:
- bringing together learning and reasoning,
- IP as a post-hoc approach to explaining the decision-making of deep-learning black-box models, and
- exploring the potential of deep-learning approaches, especially large language models such as OpenAI Codex, for IP.
Furthermore, interactive approaches to IP will be discussed together with recent research on machine teaching. Potential applications of such approaches to end-user programming as well as to programming education will be explored, based on cognitive science research on concept acquisition and human teaching.
Participants are encouraged to upload information about their research interests and the topics they want to discuss before the seminar starts, and also to browse the information offered by the other participants beforehand. The seminar is the sixth in a series that started in 2013. A long-term objective of the seminar is to establish IP as a self-contained research topic in AI, especially as a field of ML and cognitive modelling. The seminar serves as a community-building event by bringing together researchers from different areas of IP, from application areas such as end-user programming and tutoring, and from cognitive science research, especially cognitive models of inductive (concept) learning. For successful community building, we seek to balance junior and senior researchers and to mix researchers from universities and industry.
Participants
- Lun Ai (Imperial College London, GB) [dblp]
- Martin Berger (University of Sussex - Brighton, GB) [dblp]
- David Cerna (The Czech Academy of Sciences - Prague, CZ) [dblp]
- David J. Crandall (Indiana University - Bloomington, US) [dblp]
- Claudia d'Amato (University of Bari, IT) [dblp]
- Luc De Raedt (KU Leuven, BE) [dblp]
- Sebastijan Dumancic (TU Delft, NL) [dblp]
- Kevin Ellis (Cornell University - Ithaca, US) [dblp]
- Nathanaël Fijalkow (CNRS - Talence, FR) [dblp]
- Bettina Finzel (Universität Bamberg, DE) [dblp]
- Johannes Fürnkranz (Johannes Kepler Universität Linz, AT) [dblp]
- Hector Geffner (UPF - Barcelona, ES) [dblp]
- Céline Hocquette (University of Oxford, GB) [dblp]
- Frank Jäkel (TU Darmstadt, DE) [dblp]
- Emanuel Kitzelmann (Technische Hochschule Brandenburg, DE) [dblp]
- Tomáš Kliegr (University of Economics - Prague, CZ) [dblp]
- Maithilee Kunda (Vanderbilt University - Nashville, US) [dblp]
- Johannes Langer (Universität Bamberg, DE) [dblp]
- Sriraam Natarajan (University of Texas at Dallas - Richardson, US) [dblp]
- Stassa Patsantzis (University of Surrey - Guildford, GB) [dblp]
- Josh Rule (University of California - Berkeley, US) [dblp]
- Zeynep G. Saribatur (TU Wien, AT) [dblp]
- Ute Schmid (Universität Bamberg, DE) [dblp]
- Gust Verbruggen (Microsoft - Keerbergen, BE) [dblp]
- Felix Weitkämper (LMU München, DE) [dblp]
Related Seminars
- Dagstuhl Seminar 13502: Approaches and Applications of Inductive Programming (2013-12-08 - 2013-12-11)
- Dagstuhl Seminar 15442: Approaches and Applications of Inductive Programming (2015-10-25 - 2015-10-30)
- Dagstuhl Seminar 17382: Approaches and Applications of Inductive Programming (2017-09-17 - 2017-09-20)
- Dagstuhl Seminar 19202: Approaches and Applications of Inductive Programming (2019-05-12 - 2019-05-17)
- Dagstuhl Seminar 21192: Approaches and Applications of Inductive Programming (2021-05-09 - 2021-05-12)
- Dagstuhl Seminar 25491: Approaches and Applications of Inductive Programming (2025-11-30 - 2025-12-05)
Classification
- Artificial Intelligence
- Human-Computer Interaction
- Machine Learning
Keywords
- Interpretable Machine Learning
- Neuro-symbolic AI
- Explainable AI
- Human-like Machine Learning
- Inductive Logic Programming