Dagstuhl Seminar 23151
Normative Reasoning for AI
(Apr 10 – Apr 14, 2023)
Organizers
- Agata Ciabattoni (TU Wien, AT)
- John F. Horty (University of Maryland - College Park, US)
- Marija Slavkovik (University of Bergen, NO)
- Leon van der Torre (University of Luxembourg, LU)
Contact
- Marsha Kleinbauer (for scientific matters)
- Christina Schwarz (for administrative matters)
Summary
Normative reasoning – roughly, reasoning about normative matters such as obligations, permissions, and rights – is receiving increasing attention in several fields related to AI and computer science. Its more traditional uses in knowledge representation and reasoning, multiagent systems, and AI & law continue to grow, but it also holds much promise for the burgeoning fields of AI ethics and explainable AI. Accordingly, the interdisciplinary seminar Normative Reasoning for Artificial Intelligence brought together researchers working in knowledge representation and reasoning, multiagent systems, AI & law, AI ethics, and explainable AI to discuss ways in which normative reasoning can be used to make progress in the latter two disciplines.
While this Dagstuhl Seminar touched upon many different aspects of normative reasoning in AI, four topics received particular attention: (i) from AI & law to AI ethics, (ii) deontic explanations, (iii) defeasible deontic logic and formal argumentation, and (iv) from theory to tools.
From AI & law to AI ethics. AI & law is a field that is concerned with, on the one hand, laws that regulate the use and development of artificial intelligence and, on the other, the use of AI by lawyers and the impact of AI on the legal profession. In this field, normative systems are often used to represent and reason about the legal code. The seminar participants explored different ways in which ideas from AI & law can be used in the context of AI ethics.
Deontic explanations. This topic concerned the use of formal methods in general, and deontic logic and the theory of normative systems in particular, to answer why-questions involving deontic expressions: "Why must I wear a face mask?", "Why is it forbidden for me to go out at night, although that other person is allowed to go out at night?", "Why has the law of privacy been changed in this way?". Deontic explanations have an essentially practical nature, which distinguishes them from (merely) scientific explanations. Scientific explanations focus on causality and uncertainty, whereas deontic explanations additionally involve preferences, norms, sanctions, and actions. While causality and uncertainty are core concerns in explainable AI, they played a relatively minor role in our seminar; instead, the seminar focused on the aspects that are distinctive of deontic explanations.
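A deontic explanation of this kind can often be displayed as a small derivation. The following is a minimal illustration in a dyadic deontic notation, where O(p | q) reads "p is obligatory given q"; the particular norm and the detachment step are our illustrative assumptions, not a formalism endorsed at the seminar:

```latex
% Toy deontic explanation: the answer to "Why must I wear a face mask?"
% cites a conditional norm together with the facts that trigger it.
\[
\frac{O(\mathit{mask} \mid \mathit{indoors} \wedge \mathit{crowded})
      \qquad
      \mathit{indoors} \wedge \mathit{crowded}}
     {O(\mathit{mask})}
\;\text{(factual detachment)}
\]
```

The explanation consists of both premises: the norm in force and the facts that make it applicable.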
Defeasible deontic logic and formal argumentation. The third topic of the seminar concerned the role of nonmonotonicity in deontic logic in general and the use of formal argumentation in particular. As is well known in the area of deontic logic, normative reasoning comes with its own set of benchmark examples and challenges, many of which concern the handling of so-called contrary-to-duty (CTD) reasoning and deontic conflicts. A plethora of formal methods has been developed to handle CTD reasoning and deontic conflicts, methods that go far beyond simple modal logics such as SDL (standard deontic logic). Furthermore, it is widely held that norms are defeasible and come with exceptions and priorities. The seminar participants discussed the role of nonmonotonicity in deontic logic and the use of techniques from formal argumentation to define defeasible deontic logics.
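To make the CTD challenge concrete, here is Chisholm's classic benchmark scenario in the language of SDL; this is the textbook formalization rather than anything specific to the seminar:

```latex
\[
\begin{array}{ll}
1.\ O(h)                          & \text{Jones ought to help his neighbours.}\\
2.\ O(h \rightarrow t)            & \text{It ought to be that if he helps, he tells them he is coming.}\\
3.\ \neg h \rightarrow O(\neg t)  & \text{If he does not help, he ought not to tell them.}\\
4.\ \neg h                        & \text{In fact, he does not help.}
\end{array}
\]
```

In SDL, (1) and (2) yield $O(t)$ via the K axiom, while (3) and (4) yield $O(\neg t)$ by modus ponens; given the D axiom, $O(t) \wedge O(\neg t)$ is inconsistent, even though the four premises seem perfectly consistent intuitively. Handling such scenarios without collapse is precisely what the richer, typically nonmonotonic formalisms discussed at the seminar aim at.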
From theory to tools. The fourth topic of the seminar concerned experimenting with and implementing normative reasoning. One of the themes discussed was the integration of normative reasoning techniques with reinforcement learning (RL) in the design of ethical autonomous agents. Another theme was the automation of deontic explanations. For example, in the recently introduced LogiKEy framework, it has been shown how Isabelle/HOL can be used as a flexible interactive testbed for the design of domain-specific logical formalisms. Isabelle/HOL incorporates a number of automated tools that provide just-in-time feedback (counter-models, examples, proofs) during the formalization process. This feedback can be used to assess and reflect upon the theoretical properties of the system being designed or implemented. Complex semantics can be encoded in Isabelle/HOL, as can notions of argumentation (already partly done for abstract argumentation), so that Isabelle/HOL is turned into a reasoning system for those specific formalisms. What is more, notions of deontic explanation can be encoded and experimented with. Analytic proof systems, another key tool for automating normative reasoning, were also discussed at the seminar.
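As one concrete (and deliberately simplistic) illustration of coupling norms with RL, the sketch below compiles conditional prohibitions into a reward-shaping wrapper; the norm representation, names, and penalty scheme are our own assumptions rather than a system presented at the seminar:

```python
# Minimal sketch: shaping an RL reward with explicit norms.
# The Norm format (condition -> forbidden action, penalty) is an
# illustrative assumption, not a standard API.
from dataclasses import dataclass
from typing import Callable

@dataclass
class Norm:
    """A conditional prohibition: if condition(state) holds, the action is forbidden."""
    condition: Callable[[dict], bool]
    forbidden_action: str
    penalty: float

def shaped_reward(state: dict, action: str, base_reward: float, norms: list[Norm]) -> float:
    """Task reward minus a penalty for every norm the chosen action violates."""
    violation_cost = sum(
        n.penalty for n in norms
        if n.condition(state) and action == n.forbidden_action
    )
    return base_reward - violation_cost

# Example: removing a mask indoors in a crowd is penalized.
norms = [Norm(lambda s: s["indoors"] and s["crowded"], "remove_mask", penalty=10.0)]
print(shaped_reward({"indoors": True, "crowded": True}, "remove_mask", 1.0, norms))  # -9.0
```

In a fuller design, the hand-coded conditions would be replaced by queries to a normative reasoner, so that the agent's penalties track a maintained normative system rather than ad hoc checks.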
Motivation
Normative reasoning is reasoning about normative matters, such as obligations, permissions, and the rights of individuals or groups. It is prevalent in both legal and ethical discourse, and it can – arguably, should – play a crucial role in the construction of autonomous agents. We often find it important to know whether specific norms apply in a given situation, to understand why and when they apply, and why some other norms do not apply. In most cases, our reasons are purely practical – we want to make the correct decision – but they can also be theoretical, as they are in theoretical ethics. Either way, the same questions are crucial in designing autonomous agents responsibly.
This Dagstuhl Seminar will bring together experts in computer science, logic, philosophy, ethics, and law with the overall goal of finding effective ways of embedding normative reasoning in AI systems. While the seminar aims to keep every aspect of normative reasoning in AI in view, it will focus on four topics in particular.
Normative reasoning for AI ethics. The first topic is concerned with the question of how the use of normative reasoning in existing fields like bioethics and AI & law can inspire the new area of AI ethics. Modern bioethics, for instance, has developed in response to the problems with applying high moral theory to concrete cases: the fact that we do not know which ethical theory is true; the fact that it is often unclear how high-level ethical theories would resolve a complex case; and the fact that the principle of publicity demands that we justify the resolution of a problem in a way that most people can understand. Reacting to these problems, the field of bioethics has moved away from top-down applications of high moral theory toward alternative approaches with their own distinctive methods. These approaches are meant to be useful for reasoning about and resolving concrete cases, even if we do not know which ethical theory is true. Since AI ethics faces similar problems, we believe that a better understanding of approaches in bioethics holds much promise for future research in AI ethics.
Deontic explanations. The second topic is concerned with the use of formal methods in general, and deontic logic and the theory of normative systems in particular, in providing deontic explanations, that is, answers to why-questions with deontic content: "Why must I wear a face mask?", "Why am I forbidden to leave the house at night, while he is not?", "Why has the law of privacy been changed in this way?" Deontic explanations are called for in widely different contexts – including individual and institutional decision-making, policy-making, and retrospective justifications of actions – and so there is a wide variety of them. Nevertheless, they are unified by their essentially practical nature.
Defeasible deontic logic and formal argumentation. The third topic of the seminar is concerned with the role of nonmonotonicity in deontic logic and the potential use of formal argumentation. In the area of deontic logic, normative reasoning is associated with a set of well-known benchmark examples and challenges, many of which have to do with the handling of contrary-to-duty scenarios and deontic conflicts. While a plethora of formal methods has been developed to account for contrary-to-duty reasoning and to handle deontic conflicts, many challenges remain open. One specific goal of the seminar is to reflect on the role of nonmonotonicity in deontic logic, as well as the use of techniques from formal argumentation to define defeasible deontic logics that would address the open challenges.
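To indicate computationally what "techniques from formal argumentation" can amount to, the following sketch computes the grounded extension of a Dung-style abstract argumentation framework by iterating its characteristic function; the framework in the example is a made-up illustration:

```python
# Grounded extension of an abstract argumentation framework (Dung 1995):
# the least fixed point of F(S) = {a : every attacker of a is attacked by S}.

def grounded_extension(arguments: set[str], attacks: set[tuple[str, str]]) -> set[str]:
    attackers = {a: {x for (x, y) in attacks if y == a} for a in arguments}

    def defended(s: set[str]) -> set[str]:
        # Arguments all of whose attackers are counter-attacked by s.
        return {a for a in arguments
                if all(any((d, b) in attacks for d in s) for b in attackers[a])}

    s: set[str] = set()
    while True:
        nxt = defended(s)
        if nxt == s:
            return s
        s = nxt

# Example: a attacks b, b attacks c; a is unattacked, so a is in,
# b is out, and c is defended against b. Grounded extension: {a, c}.
print(grounded_extension({"a", "b", "c"}, {("a", "b"), ("b", "c")}))
```

In a defeasible deontic setting, the arguments would be constructed from norms and facts, and the attacks would encode exceptions and priorities among them.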
From theory to tools. The fourth topic of the seminar concerns implementing and experimenting with normative reasoning. One of the themes we plan to discuss is the integration of normative reasoning techniques with reinforcement learning in the design of ethical autonomous agents. Another is the automation of deontic explanations using LogiKEy and other frameworks.
Participants
- Guillaume Aucher (University of Rennes, FR) [dblp]
- Kevin Baum (DFKI - Saarbrücken, DE) [dblp]
- Christoph Benzmüller (Universität Bamberg, DE) [dblp]
- Jan M. Broersen (Utrecht University, NL) [dblp]
- Pedro Cabalar (University of Coruña, ES) [dblp]
- Ilaria Canavotto (University of Maryland - College Park, US) [dblp]
- Agata Ciabattoni (TU Wien, AT) [dblp]
- Célia da Costa Pereira (Université Côte d’Azur - Sophia Antipolis, FR) [dblp]
- Mehdi Dastani (Utrecht University, NL) [dblp]
- Louise A. Dennis (University of Manchester, GB) [dblp]
- Frank Dignum (University of Umeå, SE) [dblp]
- Virginia Dignum (University of Umeå, SE) [dblp]
- Huimin Dong (Sun Yat-Sen University - Zhuhai, CN) [dblp]
- Thomas Eiter (TU Wien, AT) [dblp]
- Eleonora Giunchiglia (TU Wien, AT)
- Guido Governatori (Tarragindi, AU) [dblp]
- John F. Horty (University of Maryland - College Park, US) [dblp]
- Joris Hulstijn (University of Luxembourg, LU) [dblp]
- Aleks Knoks (University of Luxembourg, LU) [dblp]
- Emiliano Lorini (CNRS - Toulouse, FR) [dblp]
- Bertram F. Malle (Brown University - Providence, US) [dblp]
- Réka Markovich (University of Luxembourg, LU) [dblp]
- Eric Pacuit (University of Maryland - College Park, US) [dblp]
- Xavier Parent (TU Wien, AT) [dblp]
- Bijan Parsia (University of Manchester, GB) [dblp]
- Adrian Paschke (FU Berlin, DE) [dblp]
- Henry Prakken (Utrecht University, NL) [dblp]
- Antonino Rotolo (University of Bologna, IT) [dblp]
- Ken Satoh (National Institute of Informatics - Tokyo, JP) [dblp]
- Marija Slavkovik (University of Bergen, NO) [dblp]
- Kai Spiekermann (London School of Economics, GB) [dblp]
- Christian Straßer (Ruhr-Universität Bochum, DE)
- Leon van der Torre (University of Luxembourg, LU) [dblp]
Classification
- Artificial Intelligence
- Logic in Computer Science
- Multiagent Systems
Keywords
- deontic logic
- autonomous agents
- AI ethics
- deontic explanations