Dagstuhl Seminar 22172
Technologies to Support Critical Thinking in an Age of Misinformation
(Apr 24 – Apr 27, 2022)
Organizers
- Andreas Dengel (DFKI - Kaiserslautern, DE)
- Laurence Devillers (CNRS - Orsay, FR & Sorbonne University - Paris, FR)
- Tilman Dingler (The University of Melbourne, AU)
- Koichi Kise (Osaka Prefecture University, JP)
- Benjamin Tag (The University of Melbourne, AU)
Contact
- Andreas Dolzmann (for scientific matters)
- Simone Schilke (for administrative matters)
Schedule
The Dagstuhl Seminar on "Technologies to Support Critical Thinking in an Age of Misinformation" ran over the course of three days in April 2022. Each day focused on one specific aspect of the problem of misinformation and the role technologies play in worsening and mitigating it.
Day 1 focused on the overall seminar goals and an introduction to the topic. All participants introduced themselves and gave a concrete example of an important challenge they had identified. The collected challenges were organized into three core themes that later served as the basis for the group work activities: Regulations/Policies, Human Factors and Platforms, and Critical Thinking. Over the course of the three days, three groups worked on defining challenge statements (Day 1), developing ideas to address them (Day 2), and formulating concrete research questions and project/collaboration proposals (Day 3).
The theoretical underpinnings of all group discussions and activities were provided by a series of topically organized presentations. Day 1 centered on how the problem of misinformation has evolved and why misinformation is so successful today. Keynote speaker Prof. Emma Spiro gave a historical overview, concluding with the key insights that networks and platforms shape information flow and that attention dynamics matter. The second keynote of the day, given by Prof. Andreas Dengel, highlighted the crucial role of images and their power to convey emotionally charged information, and showed how technology (e.g., convolutional neural networks, CNNs) can be used to detect and classify such images, and potentially correct them.
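To make the kind of CNN-based detection pipeline discussed in the keynote concrete, below is a minimal sketch of fine-tuning a pretrained CNN as a binary image classifier. The two-class setup (authentic vs. manipulated), the data/train folder layout, and all hyperparameters are illustrative assumptions, not the system presented in the talk.

```python
# Minimal sketch: fine-tuning a pretrained CNN to flag potentially
# manipulated or emotionally charged images. The two-class setup and the
# "data/train" folder layout are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision import datasets, models, transforms

# Standard ImageNet preprocessing for a pretrained backbone.
preprocess = transforms.Compose([
    transforms.Resize(256),
    transforms.CenterCrop(224),
    transforms.ToTensor(),
    transforms.Normalize(mean=[0.485, 0.456, 0.406],
                         std=[0.229, 0.224, 0.225]),
])

# Hypothetical dataset: one subfolder per class, e.g. authentic/ and manipulated/.
train_set = datasets.ImageFolder("data/train", transform=preprocess)
loader = torch.utils.data.DataLoader(train_set, batch_size=32, shuffle=True)

# Freeze a pretrained ResNet backbone and train only a new 2-class head.
model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in model.parameters():
    param.requires_grad = False
model.fc = nn.Linear(model.fc.in_features, 2)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()

model.train()
for images, labels in loader:  # one illustrative training epoch
    optimizer.zero_grad()
    loss = criterion(model(images), labels)
    loss.backward()
    optimizer.step()
```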
On Day 2, the participants zeroed in on the role technology plays. Session 1 started with a keynote by Prof. Niels van Berkel on the role of Artificial Intelligence and human-AI interaction. Looking at technology, society, and policy on a larger scale, van Berkel identified a core issue: a lack of literacy on both the technological and the regulatory side, potentially a consequence of the shortage of qualified technical personnel in regulatory bodies. Keynote 2, by Prof. Laurence Devillers, looked at how technology is used to misinform, deceive, and change public opinion. She proposed solutions, such as nudging and boosting techniques, and argued that human-AI interaction needs to be better understood and that research and industry must work together to mitigate the lack of literacy. In Session 2 of the day, Prof. Albrecht Schmidt led an open, provocative discussion that served as a brainstorming session for the upcoming group work, focusing mainly on the role of platforms and technology. The third keynote was given by Prof. Stephan Lewandowsky, who gave a detailed account of the role of human cognition and the larger impact of misinformation on democratic societies. He identified pressure points and proposed countermeasures that are effective but need to be scaled up through improved and coordinated cross-country regulation. Day 2 ended with a Misinformation Escape Room group activity (demo), led by Dr. Chris Coward, which aims to teach players about the power of misinformation and the complexity of the problem.
Day 3 featured a keynote by Roger Taylor that focused on how misinformation is regulated globally and on how regulatory frameworks (e.g., the Digital Services Act) and effective regulation can help mitigate the misinformation problem. As an advisor to the UK government and an expert in responsible-AI programs and data ethics, Roger Taylor highlighted pain points in the bureaucracy and the misaligned aims of technology development and research on the one hand and politics on the other.
Misinformation and fake news abound on the Internet. Characterised as factually incorrect information that is intentionally manipulated to deceive the receiver, misinformation often challenges our ability to tell fake from truth. New technology has eased the distribution of misinformation and enabled governments, organisations, and individuals to influence public opinion. Technology, however, also offers governments and organisations new avenues for detecting and correcting false information.
The very same technologies that are used to collect large amounts of personal information and target users’ cognitive vulnerabilities also offer intelligent solutions to the problem of misinformation. Pattern recognition and Natural Language Processing have made fact-checking applications and spam filters more accurate and reliable. Machine Learning, big data, and context-aware computing systems can be used to detect misinformation in situ and provide cognitive security. Such self-learning systems can protect users and prevent misinformation from finding fertile ground. Researchers and practitioners in Human-Computer Interaction are at the forefront of designing and developing user-facing computing systems. Consequently, we bear a special responsibility for working on solutions that mitigate the problems arising from misinformation and bias-enforcing interfaces.
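As a small illustration of the NLP-based detection mentioned above, here is a minimal sketch of a text classifier that scores headlines as potentially misleading. The inline toy dataset, the TF-IDF features, and the logistic-regression model are illustrative assumptions; real fact-checking systems rely on far larger labelled corpora and additional signals.

```python
# Minimal sketch of NLP-based misinformation detection: a TF-IDF +
# logistic-regression classifier. The tiny inline dataset is purely
# illustrative; real systems train on large labelled corpora.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical labelled examples (1 = misleading, 0 = reliable).
texts = [
    "Miracle cure doctors don't want you to know about",
    "Study finds moderate exercise lowers blood pressure",
    "Shocking secret the government is hiding from you",
    "City council approves new budget for public transit",
]
labels = [1, 0, 1, 0]

# Word uni- and bigrams capture stylistic cues of clickbait and deception.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2), lowercase=True),
    LogisticRegression(max_iter=1000),
)
clf.fit(texts, labels)

# Score a new headline: estimated probability that it is misleading.
headline = "You won't believe this one weird trick"
print(clf.predict_proba([headline])[0][1])
```

A classifier like this captures only surface-level stylistic cues (e.g., clickbait phrasing); in practice it would be one component alongside provenance signals and human fact-checking.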
This Dagstuhl Seminar aims to bring together designers, developers, practitioners, and thinkers across disciplines to discuss and devise solutions in the form of technologies and applications that instil and nurture critical thinking in their users. With a focus on misinformation, we will explore users’ vulnerabilities in order to discuss and design solutions that keep users safe from manipulation, i.e., provide cognitive security. Over three days, an esteemed selection of about 30 participants will engage with the problem of misinformation and re-think the incentive structures and mechanisms of social computing systems, with particular regard to news media and to how people encounter and process misinformation. By looking at systems, users, and applications from an interdisciplinary perspective, we aim to produce a research agenda and blueprints for systems that provide transparency, contribute to advancing technology and media literacy, build critical thinking skills, and depolarise by design.
Participants
- Chris Coward (University of Washington - Seattle, US)
- Henriette Cramer (Spotify - San Francisco, US) [dblp]
- Andreas Dengel (DFKI - Kaiserslautern, DE) [dblp]
- Tilman Dingler (The University of Melbourne, AU) [dblp]
- David Eccles (The University of Melbourne, AU)
- Nabeel Gillani (MIT - Cambridge, US)
- Koichi Kise (Osaka Prefecture University, JP) [dblp]
- Dimitri Molerov (Universität Mainz, DE)
- Albrecht Schmidt (LMU München, DE) [dblp]
- Gautam Kishore Shahi (Universität Duisburg-Essen, DE) [dblp]
- Benjamin Tag (The University of Melbourne, AU) [dblp]
- Roger Taylor (Open Data Partners - London, GB) [dblp]
- Niels van Berkel (Aalborg University, DK) [dblp]
- Andrew Vargo (Osaka Prefecture University - Sakai, JP) [dblp]
- Eva Wolfangel (Stuttgart, DE)
- Susanne Boll (Universität Oldenburg, DE) [dblp]
- Nattapat Boonprakong (The University of Melbourne, AU)
- Laurence Devillers (CNRS - Orsay, FR & Sorbonne University - Paris, FR) [dblp]
- Dilrukshi Gamage (Tokyo Institute of Technology, JP)
- Stephan Lewandowsky (University of Bristol, GB)
- Philipp Lorenz-Spreen (MPI for Human Development - Berlin, DE)
- Emma Spiro (University of Washington - Seattle, US) [dblp]
- Junichi Tsujii (AIRC - Tokyo, JP) [dblp]
Classification
- Human-Computer Interaction
- Other Computer Science
- Social and Information Networks
Keywords
- Cognitive Security
- Misinformation
- Bias Computing