Dagstuhl Seminar 13231
Belief Change and Argumentation in Multi-Agent Scenarios
(June 2 – June 7, 2013)
Organizers
- Jürgen Dix (TU Clausthal, DE)
- Sven Ove Hansson (KTH Royal Institute of Technology, SE)
- Gabriele Kern-Isberner (TU Dortmund, DE)
- Guillermo R. Simari (National University of the South - Bahía Blanca, AR)
Impacts
- Dix, Jürgen; Hansson, Sven Ove; Kern-Isberner, Gabriele; Simari, Guillermo R. (eds.): Belief Change and Argumentation in Multi-Agent Scenarios (special issue). Annals of Mathematics and Artificial Intelligence 78(3/4), pp. 177-360. Elsevier, Amsterdam, 2016.
- Besnard, Philippe; Hunter, Anthony: Constructing Argument Graphs with Deductive Arguments: A Tutorial. Argument & Computation 5(1), pp. 5-30. Taylor & Francis, London, 2014.
Program
Multiagent systems have become increasingly relevant as a general paradigm for distributed problem solving and for the cooperation of autonomous intelligent agents. Acting in uncertain and dynamic scenarios demands the capability of reasoning defeasibly, i.e., the agents should be capable of revising hypotheses or previously drawn conclusions to maintain an adequate, consistent model of the world. Belief change theory addresses all kinds of epistemic changes in response to new information or changes in the world. It emerged in 1985 from the basic AGM theory of belief revision (named after its founders Alchourrón, Gärdenfors, and Makinson) and has been broadened considerably to deal with iterated change operations in different frameworks such as classical logics, conditional and default logics, logic programming and Horn clause logic, as well as probabilistic logic and ontologies, with close relationships to plausible reasoning. Argumentation theory also provides frameworks for reasoning, by setting up formal structures that allow the processing and evaluation of arguments in favor of or against a certain option. Here, the focus is more on dialectical deliberation and on finding justifications for decisions. Therefore, argumentation theory is particularly useful for decision making, both for a single agent and within a group of agents. It also offers natural methodological frameworks for distributed and collaborative decision making, or for solving conflicts between agents, by evaluating opposing statements and views. Proposed approaches to argumentation range from very abstract argumentation frameworks to specialized argumentation systems, e.g., for applications in law.
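As background for readers unfamiliar with the AGM framework, the six basic revision postulates can be stated as follows (a standard textbook formulation, added here only for reference and not part of the original seminar material), where $K$ is a logically closed belief set, $\varphi$ and $\psi$ are sentences, $\mathrm{Cn}$ is the consequence operator, and $K + \varphi = \mathrm{Cn}(K \cup \{\varphi\})$ denotes expansion:
\begin{align*}
&(K{\ast}1)\ \text{Closure:} && K \ast \varphi = \mathrm{Cn}(K \ast \varphi)\\
&(K{\ast}2)\ \text{Success:} && \varphi \in K \ast \varphi\\
&(K{\ast}3)\ \text{Inclusion:} && K \ast \varphi \subseteq K + \varphi\\
&(K{\ast}4)\ \text{Vacuity:} && \text{if } \neg\varphi \notin K \text{, then } K + \varphi \subseteq K \ast \varphi\\
&(K{\ast}5)\ \text{Consistency:} && K \ast \varphi \text{ is consistent whenever } \varphi \text{ is consistent}\\
&(K{\ast}6)\ \text{Extensionality:} && \text{if } \mathrm{Cn}(\varphi) = \mathrm{Cn}(\psi) \text{, then } K \ast \varphi = K \ast \psi
\end{align*}
The two supplementary postulates governing revision by conjunctions are omitted here for brevity.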
Belief revision and argumentation can be regarded as two dimensions of reasoning that provide complementary possibilities. While belief revision investigates methods and quality criteria that address the problem of how beliefs should be changed or adjusted in a rational way, argumentation seeks justifications that show why beliefs are established and why certain options should be chosen. By combining them, researchers can address both dimensions of reasoning and solve problems that exceed the scope of either area of research alone. Moreover, since reasoning and decision making are core components of modern agent architectures, closer links between these two components are apt to improve coherence within the agent model. From a more theoretical point of view, elaborating the connections between argumentation and belief revision and making the techniques of either domain usable for the other will lead to substantial progress in the field of knowledge representation.
This seminar aims to bring together researchers from the fields of argumentation theory and belief change theory, both from philosophy and from computer science, to present recent research results and exchange ideas for combining argumentation and belief change. Studies of the relationships between the two areas have started in recent years, with the aim of advancing the state of the art within one field using methods and techniques of the other. The proposed seminar will thus foster pioneering work in combining two important and very active subfields of AI research, with the expected outcome of improving advanced reasoning techniques for multiagent systems. It will provide an interdisciplinary platform to attract high-quality expertise from philosophy and computer science in order to discuss and cooperate on solutions to current problems in argumentation and belief revision.
Belief change and argumentation theory both belong to the wide field of knowledge representation, but their focal points differ. Argumentation theory provides frameworks for reasoning by setting up formal structures that allow the processing and evaluation of arguments for or against a certain option. Here, the focus is on dialectical deliberation and on finding justifications for decisions. Belief change theory focuses on the adjustments of previously held beliefs that are needed in such processes. However, the interrelations between the two fields remain largely unexplored.
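To make the notion of such a formal structure concrete, here is a minimal sketch (illustrative only; the arguments, the attack relation, and all identifiers are invented for this example and do not come from the seminar) that computes the grounded extension of a Dung-style abstract argumentation framework by iterating its characteristic function:

# Minimal sketch: grounded extension of a Dung-style abstract argumentation
# framework AF = (args, attacks). The grounded extension is the least fixed
# point of the characteristic function
#   F(S) = {a in args | every attacker of a is attacked by some member of S},
# which we obtain by iterating F from the empty set (the framework is finite).
# The example framework below is invented for illustration.

def grounded_extension(args, attacks):
    """Return the grounded extension of the framework (args, attacks)."""
    attackers = {a: {x for (x, y) in attacks if y == a} for a in args}

    def acceptable(a, s):
        # a is acceptable w.r.t. s if every attacker of a is attacked by s
        return all(any((d, b) in attacks for d in s) for b in attackers[a])

    extension = set()
    while True:
        new = {a for a in args if acceptable(a, extension)}
        if new == extension:
            return extension
        extension = new

if __name__ == "__main__":
    args = {"a", "b", "c"}
    attacks = {("a", "b"), ("b", "c")}        # a attacks b, b attacks c
    print(grounded_extension(args, attacks))  # prints {'a', 'c'}: b is defeated

The iteration terminates because the characteristic function is monotone and the framework is finite; other semantics (complete, preferred, stable) refine or relax this fixed-point construction.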
Both argumentation theory and belief revision are of substantial relevance for multi-agent systems, which are seeing heavy use in industrial and other practical applications across diverse areas due to their suitability for realizing distributed autonomous systems. Moreover, the topics of this seminar address recent research questions in the general area of decision making and are innovative in their combination of methods.
The seminar took place June 3–7, 2013, with 39 participants from 16 countries. The program included overview talks, individual presentations by the participants, and group discussions. Overview talks ranged from 30 to 35 minutes; individual presentations were about 25 minutes long, including questions. We specifically asked participants not to present their current research (their next conference paper), but rather to relate their work to argumentation and belief revision and to explain how it could be used in agent theories.
Participants were encouraged to use their presentations to provide input for the discussion groups. We organized two discussion groups that each met twice (they took place in the afternoon, before and after the coffee break). Each group was headed by two organizers as discussion leaders (see Section 4).
The seminar concluded with the presentation of the group discussions on Friday morning and a wrap-up of the seminar.
The discussion groups yielded some core topics that will help to focus further scientific work: semantic issues concerning belief revision and argumentation were seen to be of major importance, and a layered view on both argumentation and belief revision, separating the underlying logic from the argumentation layer and the revision layer, respectively, helped to provide common ground for the two communities. Both topics proved very successful in stimulating scientific discourse, gave rise to interesting questions that may lead to papers and projects in the future, and promise to allow a deeper analysis and a better understanding of the links between the two areas. Furthermore, a strong interest in more applications and benchmarks became apparent, and a road map collecting information on these is planned.
The organizers agreed to put together a special issue of Annals of Mathematics and Artificial Intelligence on Argumentation and Belief Revision and to invite papers on the use of methods and tools from belief change theory in argumentation theory, on the use of methods and tools from argumentation theory in belief change theory, on systems and frameworks that combine elements from both belief change and argumentation, and on practical applications of argumentation or belief revision in multi-agent systems or knowledge representation.
Participants
- Edmond Awad (Masdar Institute - Abu Dhabi, AE) [dblp]
- Florence Bannay-Dupin de St-Cyr (Paul Sabatier University - Toulouse, FR) [dblp]
- Pietro Baroni (University of Brescia, IT) [dblp]
- Ringo Baumann (Universität Leipzig, DE) [dblp]
- Pierre Bisquert (Paul Sabatier University - Toulouse, FR) [dblp]
- Alexander Bochman (Holon Institute of Technology, IL) [dblp]
- Martin Caminada (University of Aberdeen, GB) [dblp]
- Célia da Costa Pereira (University of Nice, FR) [dblp]
- Jürgen Dix (TU Clausthal, DE) [dblp]
- André Fuhrmann (Universität Frankfurt, DE) [dblp]
- Dov M. Gabbay (King's College London, GB) [dblp]
- Aditya K. Ghose (University of Wollongong, AU) [dblp]
- Massimiliano Giacomin (University of Brescia, IT) [dblp]
- Sven Ove Hansson (KTH Royal Institute of Technology, SE) [dblp]
- Andreas Herzig (Paul Sabatier University - Toulouse, FR) [dblp]
- Anthony Hunter (University College London, GB) [dblp]
- Gabriele Kern-Isberner (TU Dortmund, DE) [dblp]
- Sebastien Konieczny (CNRS - Lens, FR) [dblp]
- Patrick Krümpelmann (TU Dortmund, DE) [dblp]
- Daniel Lehmann (The Hebrew University of Jerusalem, IL) [dblp]
- Beishui Liao (Zhejiang University, CN) [dblp]
- Pierre Marquis (CNRS - Lens, FR) [dblp]
- Maria Vanina Martinez (University of Oxford, GB) [dblp]
- Peter Novák (TU Delft, NL) [dblp]
- Nir Oren (University of Aberdeen, GB) [dblp]
- Odile Papini (University of Marseille, FR) [dblp]
- Matei Popovici (TU Clausthal, DE) [dblp]
- Mauricio Reis (University of Madeira - Funchal, PT) [dblp]
- Tjitze Rienstra (University of Luxembourg, LU) [dblp]
- Ken Satoh (National Institute of Informatics - Tokyo, JP) [dblp]
- Jan Sefranek (Comenius University in Bratislava, SK) [dblp]
- Gerardo I. Simari (University of Oxford, GB) [dblp]
- Guillermo R. Simari (National University of the South - Bahía Blanca, AR) [dblp]
- Andrea Tettamanzi (University of Nice, FR) [dblp]
- Matthias Thimm (Universität Koblenz-Landau, DE) [dblp]
- Serena Villata (INRIA Sophia Antipolis - Méditerranée, FR) [dblp]
- Emil Weydert (University of Luxembourg, LU) [dblp]
- Stefan Woltran (TU Wien, AT) [dblp]
- Zhiqiang Zhuang (Griffith University - Brisbane, AU) [dblp]
Classification
- artificial intelligence / robotics
- semantics / formal methods
- verification / logic
Keywords
- argumentation
- belief revision
- multiagent systems