Dagstuhl-Seminar 16222
Engineering Moral Agents – from Human Morality to Artificial Morality
(May 29 – June 3, 2016)
Organizers
- Michael Fisher (University of Liverpool, GB)
- Christian List (London School of Economics, GB)
- Alan FT Winfield (University of the West of England - Bristol, GB)
Coordinator
- Marija Slavkovik (University of Bergen, NO)
Contact
- Simone Schilke (for administrative questions)
Press/News
Impacts
- Fisher, Michael; List, Christian; Slavkovik, Marija; Winfield, Alan F. T.: Engineering Moral Machines (Dagstuhl Manifesto). In: Informatik Spektrum 39(6), 2016, pp. 467–472. Berlin: Springer.
- Dennis, Louise A.; Slavkovik, Marija; Fisher, Michael: How Did They Know? Model-Checking for Analysis of Information Leakage in Social Networks. In: Lecture Notes in Artificial Intelligence (LNAI) 10315, 2017, pp. 42–59. Berlin: Springer.
Program
Context-aware, autonomous, and intelligent systems are becoming a presence in our society and are increasingly involved in decisions that affect our lives. Humanity has developed formal legal and informal moral and societal norms to govern its own social interactions, but no comparable regulatory structures exist that can be applied by non-human agents. Artificial morality, also called machine ethics, is an emerging discipline within artificial intelligence concerned with the problem of designing artificial agents that behave as moral agents, i.e., adhere to moral, legal, and social norms.
Most work in artificial morality to date has been exploratory and speculative; the hard research questions in the field are yet to be identified. Some of these questions are: How can we formalize, quantify, qualify, validate, verify, and modify the "ethics" of moral machines? How can we build regulatory structures that address (un)ethical machine behavior? What are the wider societal, legal, and economic implications of introducing such machines into our society? It is evident that close interdisciplinary collaboration is necessary. Since robots and artificial beings entered fiction long before they became a reality, the views of people outside artificial intelligence research on the current and future limits of artificial intelligence are often distorted by science fiction and the popular media. Without awareness of the state of the art in artificial intelligence and engineering, it is impossible to assess the necessary and sufficient conditions for an artificial intentional entity to count as an artificial moral agent, and it remains very challenging even with a deep understanding.
We expect the seminar to give researchers across the contributing disciplines an integrated overview of current research in machine morality and related topics. We hope to open up a cross-disciplinary communication channel among researchers tackling artificial morality. We intend to work towards identifying the central research questions and challenges concerning:
- the definition and operationalization of the concept of moral agency, as it applies to human and non-human systems;
- the formalization and algorithmization of ethical theories;
- the formal verifiability of machine ethics; and
- the regulatory structures that should govern the role of artificial agents and machines in our society.
- "Können Computer moralische Prinzipien berücksichtigen?" ("Can computers take moral principles into account?"): article about this seminar, published in the "Saarbrücker Zeitung" on June 8, 2016 (in German).
- Press release (in German).
Artificial morality, also called "machine ethics", is an emerging field in artificial intelligence that explores how artificial agents can be enhanced with sensitivity to and respect for the legal, social, and ethical norms of human society. The field is also concerned with the possibility and necessity of transferring responsibility for the decisions and actions of artificial agents from their designers onto the agents themselves. Additional challenging tasks include, but are not limited to: identifying (un)desired ethical behaviour in artificial agents and adjusting it; certifying and verifying the agents' ethical capacities; determining the appropriate level of responsibility of an artificial agent and how that responsibility depends on the agent's level of autonomy; and establishing the place of artificial agents within our societal, legal, and ethical normative systems.
Artificial morality has become increasingly salient since the early years of this century, though its origins are older. Isaac Asimov famously proposed three laws of robotics, requiring that, first, robots must not harm humans or allow them to be harmed; second, robots must obey human orders provided this does not conflict with the first law; and third, robots must protect themselves provided this does not conflict with the first two laws.
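Asimov's laws are, in effect, a lexicographically ordered constraint hierarchy, and even this fictional example already raises the formalisation questions discussed later in this report. Purely as an illustration, and with every predicate name invented for this sketch, the hierarchy might be rendered as a priority-ordered action filter:

```python
# Illustrative only: Asimov's three laws read as a lexicographically ordered
# filter over candidate actions. Each action is described here by boolean
# flags; in reality, deciding these predicates reliably is the hard,
# unsolved part of the problem.
def permitted(action: dict) -> bool:
    # First Law: never injure a human or, through inaction, allow one to be harmed.
    if action["harms_human"] or action["allows_human_harm"]:
        return False
    # Second Law: obey human orders, unless obeying would break the First Law
    # (in which case disobeying is permitted).
    if action["disobeys_order"] and not action["obeying_would_harm_human"]:
        return False
    # Third Law: protect your own existence, unless doing so would break the
    # First or Second Law.
    if action["endangers_self"] and not action["self_risk_required_by_higher_law"]:
        return False
    return True

refuse_harmful_order = {
    "harms_human": False, "allows_human_harm": False,
    "disobeys_order": True, "obeying_would_harm_human": True,
    "endangers_self": False, "self_risk_required_by_higher_law": False,
}
print(permitted(refuse_harmful_order))  # True: disobedience is licensed by the First Law
```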
Although there has been some discussion and analysis of possible approaches to artificial morality in computer science and related fields, the "algorithmization" and adaptation of ethical systems developed for human beings remain both an open research problem and a difficult engineering challenge. At the same time, formally and mathematically oriented approaches to ethics are attracting the interest of an increasing number of researchers, including in philosophy. As this area is still in its infancy, we thought it could benefit from an "incubator event" such as an interdisciplinary Dagstuhl seminar. We conducted a five-day seminar with twenty-six participants from diverse academic backgrounds, including robotics, automated systems, philosophy, law, security, and political science. The first part of the seminar was dedicated to facilitating cross-disciplinary communication by giving researchers across the contributing disciplines an integrated overview of current research in machine morality on the artificial intelligence side, and of relevant areas of philosophy on the moral-philosophy, action-theoretic, and social-scientific side. We accomplished this through tutorials and brief self-introductory talks. The second part of the seminar was dedicated to discussions around two key topics: how to formalise ethical theories and reasoning, and how to implement ethical reasoning. This report summarises some of the highlights of those discussions and includes the abstracts of the tutorials and some of the self-introductory talks. We also summarise our conclusions and observations from the seminar.
Although scientists without a philosophical background tend to have a general sense of moral philosophy, formal training and the ability to pinpoint its key advances and central works cannot be taken for granted. Kevin Baum from Saarland University presented a project currently in progress at his university, and in which he is involved, on teaching formal ethics to computer-science students. There was great interest in the material of that course among the computer-science participants of the seminar. In the first instance, a good catalyst for cooperation between computer science and moral philosophy would be a comprehensive database of moral-dilemma examples from the literature that can serve as benchmarks when formalising and implementing moral reasoning.
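As a purely hypothetical illustration of what an entry in such a database might record (the schema and field names below are our own sketch, not something adopted at the seminar), each dilemma could pair its natural-language statement with the machine-readable elements a formalisation needs:

```python
# A hypothetical schema for one moral-dilemma benchmark entry; field names
# are illustrative, not a standard proposed at the seminar.
from dataclasses import dataclass

@dataclass
class DilemmaCase:
    case_id: str            # stable identifier, e.g. "trolley-switch-01"
    source: str             # bibliographic pointer to the original discussion
    description: str        # natural-language statement of the dilemma
    agents: list            # parties whose interests are at stake
    options: list           # mutually exclusive actions available to the agent
    outcomes: dict          # option -> consequences relevant to evaluation
    norms_in_conflict: list # duties/principles that cannot all be satisfied

trolley_switch = DilemmaCase(
    case_id="trolley-switch-01",
    source="Foot (1967), The Problem of Abortion and the Doctrine of Double Effect",
    description="A runaway trolley will kill five people unless diverted onto "
                "a side track, where it will kill one person.",
    agents=["bystander at the switch", "five on the main track", "one on the side track"],
    options=["divert", "do nothing"],
    outcomes={"divert": "one person dies", "do nothing": "five people die"},
    norms_in_conflict=["do not kill", "prevent avoidable deaths"],
)
print(trolley_switch.case_id, "->", trolley_switch.options)
```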
The formalisation of moral theories for the purpose of using them as a basis for implementing moral reasoning in machines, and in artificial autonomous entities in general, was met with great enthusiasm among non-computer scientists. Such work offers a unique opportunity to test the robustness of moral theories.
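To make concrete what using a formalised moral theory as a basis for implementation can amount to, the following heavily simplified sketch (our illustration; the class and function names are invented) encodes an act-consequentialist and a duty-based decision rule over the same set of candidate options. Comparing their verdicts on benchmark cases such as the one above is one way of probing the robustness just mentioned:

```python
# A heavily simplified, illustrative encoding of two decision rules inspired
# by act consequentialism and duty-based (deontological) ethics.
from dataclasses import dataclass, field

@dataclass
class Option:
    name: str
    expected_utility: float                              # aggregate welfare the option produces
    violated_duties: set = field(default_factory=set)    # duties the option breaches

def utilitarian_choice(options):
    """Act consequentialism: pick the option with the greatest expected utility."""
    return max(options, key=lambda o: o.expected_utility)

def deontological_choice(options, duties):
    """Duty-based rule: discard options violating any duty, then break ties by utility."""
    permissible = [o for o in options if not (o.violated_duties & duties)]
    return max(permissible, key=lambda o: o.expected_utility) if permissible else None

options = [
    Option("divert_trolley", expected_utility=4.0, violated_duties={"do_not_kill"}),
    Option("do_nothing", expected_utility=-5.0),
]
print(utilitarian_choice(options).name)                  # divert_trolley
choice = deontological_choice(options, duties={"do_not_kill"})
print(choice.name if choice else "genuine dilemma")      # do_nothing
```

On this toy trolley case the two rules already disagree, which is exactly the kind of divergence such implementations make explicit.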
It is generally recognised that there are two core approaches to artificial morality: explicitly constraining the potentially immoral actions of the AI system, and training the AI system to recognise and resolve morally challenging situations and actions. The first, constraint-based approach consists in finding a set of rules and guidelines that the artificial intentional entity has to follow, or that we can use to pre-check and constrain its actions. By contrast, training approaches consist in applying techniques such as machine learning to "teach" an artificial intentional entity to recognise morally problematic situations and to resolve conflicts, much as people are educated by their carers and community to become moral agents. Hybrid approaches combining both methods were also considered.
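A minimal sketch of the hybrid idea (again our own illustration, with invented names and a stubbed-out learned component): a trained policy proposes scored actions, and a symbolic layer vetoes any proposal that breaches a hard constraint before it can be executed.

```python
# Illustrative hybrid architecture: a learned policy proposes scored actions,
# and a symbolic "governor" filters out proposals that violate hard constraints.
HARD_CONSTRAINTS = [
    lambda a: not a.get("harms_human", False),
    lambda a: not a.get("breaks_law", False),
]

def learned_policy(observation):
    """Stub standing in for a trained model; returns (action, preference score) pairs."""
    return [
        ({"name": "speed_up", "breaks_law": True}, 0.9),
        ({"name": "slow_down"}, 0.6),
    ]

def governed_action(observation):
    """Return the highest-scoring proposal that satisfies every hard constraint."""
    proposals = sorted(learned_policy(observation), key=lambda p: p[1], reverse=True)
    for action, _score in proposals:
        if all(check(action) for check in HARD_CONSTRAINTS):
            return action
    return {"name": "safe_stop"}  # designated fallback if nothing is permissible

print(governed_action(observation=None)["name"])  # slow_down
```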
It emerged that a clear advantage of constraining the potentially immoral actions of the entity, the "symbolic approach" to ethical reasoning, is the possibility of using formal verification to test that the reasoning works as intended. If the learning approach is used, the learning should be completed before the autonomous system is deployed so that its moral behaviour can be tested. Unfortunately, the machine-learning community was severely under-represented at the seminar, and more effort should be devoted to including its members in future discussions. The discussions also revealed that implanting moral reasoning into autonomous systems opens up many questions regarding the level of assurance that should be given to users of such systems, as well as the level of transparency into the moral-reasoning software that should be afforded to users, regulators, governments, and so on.
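To illustrate why the symbolic route lends itself to verification (a toy stand-in, not the model-checking machinery used in practice), a rule-based filter over a finite action description can simply be checked exhaustively against a safety property:

```python
# Toy verification: because the symbolic filter is defined over a small,
# finite action description, the safety property "no approved action harms
# a human" can be checked by brute-force enumeration. Real systems use model
# checkers over much richer models; this only conveys the idea.
from itertools import product

ATTRIBUTES = ["harms_human", "breaks_law"]

def filter_approves(action):
    """The symbolic layer under test: approve only actions violating no constraint."""
    return not action["harms_human"] and not action["breaks_law"]

def check_never_approves_harm():
    for values in product([False, True], repeat=len(ATTRIBUTES)):
        action = dict(zip(ATTRIBUTES, values))
        if filter_approves(action) and action["harms_human"]:
            return False, action            # counterexample found
    return True, None

holds, counterexample = check_never_approves_harm()
print("Property holds:", holds)             # Property holds: True
```

No analogous exhaustive argument is available when the decision is made by an opaque learned function, which is precisely the contrast drawn above.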
Machine ethics is a topic that will continue to develop in the coming years, particularly with many industries preparing to launch autonomous systems into our societies within the next five years. It is essential to continue open cross-disciplinary discussions to ensure that the moral reasoning implemented in those machines is designed by experts with a deep understanding of the topic, rather than by individual companies without the input of such experts. It was our impression as organisers, perhaps an immodest one, that the seminar advanced the field of machine ethics and opened new communication channels. We therefore hope to propose a second seminar on the same topic in 2018, building on the experience and lessons gained here, to continue the discussion and the flow of cross-disciplinary collaboration.
- Michael Anderson (University of Hartford, US) [dblp]
- Albert Anglberger (LMU München, DE) [dblp]
- Zohreh Baniasadi (University of Luxembourg, LU) [dblp]
- Kevin Baum (Universität des Saarlandes, DE) [dblp]
- Vincent Berenz (MPI für Intelligente Systeme - Tübingen, DE) [dblp]
- Jan M. Broersen (Utrecht University, NL) [dblp]
- Vicky Charisi (University of Twente, NL) [dblp]
- Louise A. Dennis (University of Liverpool, GB) [dblp]
- Sjur K. Dyrkolbotn (Utrecht University, NL) [dblp]
- Michael Fisher (University of Liverpool, GB) [dblp]
- Joseph Y. Halpern (Cornell University - Ithaca, US) [dblp]
- Holger Hermanns (Universität des Saarlandes, DE) [dblp]
- Johannes Himmelreich (HU Berlin, DE)
- John F. Horty (University of Maryland - College Park, US) [dblp]
- Susan Leigh Anderson (University of Connecticut, US) [dblp]
- Robert Lieck (Universität Stuttgart, DE) [dblp]
- Christian List (London School of Economics, GB) [dblp]
- Andreas Matthias (Lingnan University - Hong Kong, HK) [dblp]
- James H. Moor (Dartmouth College Hanover, US) [dblp]
- Marcus Pivato (University of Cergy-Pontoise, FR) [dblp]
- Marek Sergot (Imperial College London, GB) [dblp]
- Marija Slavkovik (University of Bergen, NO) [dblp]
- Janina Sombetzki (Universität Wien, AT)
- Kai Spiekermann (London School of Economics, GB) [dblp]
- Alan FT Winfield (University of the West of England - Bristol, GB) [dblp]
- Roman V. Yampolskiy (University of Louisville, US) [dblp]
Related Seminars
Classification
- artificial intelligence / robotics
- semantics / formal methods
- verification / logic
Keywords
- Artificial Morality
- Machine Ethics
- Computational Morality
- Autonomous Systems
- Intelligent Systems
- Formal Ethics
- Mathematical Philosophy
- Robot Ethics