Dagstuhl Seminar 25051

Trust and Accountability in Knowledge Graph-Based AI for Self Determination

(Jan 26 – Jan 31, 2025)

Permalink
Please use the following short url to reference this page: https://www.dagstuhl.de/25051


Motivation

In just one minute in April 2022, there were 5,900,000 searches on Google, 1,700,000 pieces of content shared on Facebook, 1,000,000 hours of video streamed, and 347,200 tweets shared on Twitter [1]. This content and data is linked to a plethora of Artificial Intelligence (AI) services, which are increasingly based on Knowledge Graphs (KGs), i.e., machine-readable data and schema representations built on the web stack of standards. The term ‘Knowledge Graph’ was first introduced by Google in 2012 and is strongly linked to the work of the Semantic Web community, which began around 2001 with the seminal paper by Berners-Lee et al. [2]. These AI services cover areas such as content recommendation, user input prediction, and large-scale search and discovery, and form the basis for the business models of companies such as Google, Netflix, Spotify, and Facebook.
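To make the idea of machine-readable data representations concrete, the sketch below illustrates the core of a knowledge graph: a set of subject–predicate–object triples over which simple pattern queries can be run. This is a toy illustration only, not any specific KG system, and the facts in it are taken from the text above.

```python
# Toy illustration of a knowledge graph: a set of
# (subject, predicate, object) triples.
KG = {
    ("Google", "introduced", "Knowledge Graph"),
    ("Knowledge Graph", "introducedIn", "2012"),
    ("Semantic Web", "proposedBy", "Berners-Lee"),
}

def query(kg, s=None, p=None, o=None):
    """Return all triples matching the pattern; None acts as a wildcard."""
    return {(ts, tp, to) for (ts, tp, to) in kg
            if (s is None or ts == s)
            and (p is None or tp == p)
            and (o is None or to == o)}

# Which triples have the predicate "introduced"?
matches = query(KG, p="introduced")
```

Real KGs use the same triple model at web scale, with standardised serialisations (e.g. RDF) and query languages (e.g. SPARQL) in place of this ad-hoc pattern matcher.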

Over a number of years, there have been growing concerns about how personal data can be abused and, consequently, how AI services impinge on citizen rights. For example, the over-centralisation of data and the abuses linked to it led Sir Tim Berners-Lee to call the Web ‘anti-human’ in a 2018 interview [3], and since 2016, hundreds of US Immigration and Customs Enforcement employees have faced investigations into abuse of confidential law enforcement databases, ranging from stalking and harassment to passing data to criminals [4]. Proposed legislation today focuses on ensuring that digital platforms, including AI platforms, provide societal benefit. Within Europe, the proposed EU AI Act aims to support safe AI that respects fundamental human rights. Regulation sets out what technologists need to do, leading to questions such as: How can the output of AI systems be trusted? What is needed to ensure that the data fuelling these artefacts, and their inner workings, are transparent? How can AI be made accountable for its decision-making? This Dagstuhl Seminar, which aims to better understand the technical landscape, will explore these questions from a KG-based AI viewpoint. It is structured around three pillar research topics - trust, accountability, and self-determination - that represent the desired goals, and four foundational research topics - Machine-readable Norms and Policies, Decentralised KG Management, Neuro-Symbolic AI, and Decentralised Applications - that constitute the technical foundations necessary to achieve those goals. The three pillars will form the structure for the first part of the event, with the foundational technical mechanisms covered alongside them.
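One of the foundational topics, machine-readable norms and policies, can be sketched in miniature. The snippet below is a hypothetical, ODRL-inspired usage policy expressed as plain data, together with a naive checker; all identifiers (the asset name, party, and actions) are invented for illustration and do not come from any real policy vocabulary.

```python
# Hypothetical machine-readable usage policy, loosely inspired by the
# W3C ODRL model: permissions grant actions on a target asset to an
# assignee, and prohibitions override them.
policy = {
    "target": "dataset:health-records",   # invented asset identifier
    "permissions": [
        {"assignee": "researcher", "action": "read"},
    ],
    "prohibitions": [
        {"assignee": "researcher", "action": "share"},
    ],
}

def allowed(policy, assignee, action, target):
    """Naive check: the target must match, the action must be
    permitted for the assignee, and not explicitly prohibited."""
    if target != policy["target"]:
        return False
    if any(r["assignee"] == assignee and r["action"] == action
           for r in policy["prohibitions"]):
        return False
    return any(r["assignee"] == assignee and r["action"] == action
               for r in policy["permissions"])
```

Because the policy is data rather than prose, an AI service can evaluate it automatically before acting on the asset, which is the core idea behind machine-readable norms.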
In terms of the event format, the seminar will be guided by short introductory talks on the foundational research topics, use cases drawn from previous projects and experience, and sessions for knowledge sharing and ideation.

References

  1. https://www.statista.com/statistics/195140/new-user-generated-content-uploaded-by-users-per-minute.
  2. Berners-Lee, T., Hendler, J. and Lassila, O., 2001. The semantic web. Scientific American, 284(5), pp. 34-43.
  3. "I Was Devastated": Tim Berners-Lee, the Man Who Created the World Wide Web, Has Some Regrets. Vanity Fair.
  4. https://www.wired.com/story/ice-agent-database-abuse-records/
Copyright John Domingue, Luis-Daniel Ibáñez, Sabrina Kirrane, and Maria-Esther Vidal

Participants


  • Sören Auer
  • Piero Andrea Bonatti
  • Irene Celino
  • Andrea Cimmino
  • Michael Cochez
  • John Domingue
  • Michel Dumontier
  • Javier David Fernández-García
  • Nicoletta Fornara
  • Irini Fundulaki
  • Sandra Geisler
  • Anna Lisa Gentile
  • José Manuel Gómez-Pérez
  • Guido Governatori
  • Paul Groth
  • Peter Haase
  • Andreas Harth
  • Olaf Hartig
  • James A. Hendler
  • Aidan Hogan
  • Katja Hose
  • Luis-Daniel Ibáñez
  • Ryutaro Ichise
  • Ernesto Jiménez-Ruiz
  • Timotheus Kampik
  • Sabrina Kirrane
  • George Konstantinidis
  • Manolis Koubarakis
  • Luis C. Lamb
  • Deborah L. McGuinness
  • Julian Padget
  • Monica Palmirani
  • Harshvardhan J. Pandit
  • Heiko Paulheim
  • Axel Polleres
  • Philipp D. Rohde
  • Daniel Schwabe
  • Oshani Seneviratne
  • Chang Sun
  • Aisling Third
  • Raphael Troncy
  • Ruben Verborgh
  • Maria-Esther Vidal
  • Jesse Wright
  • Sonja Zillner

Classification
  • Artificial Intelligence
  • Computers and Society
  • Databases

Keywords
  • Trust
  • Transparency
  • Accountability
  • Knowledge Graphs
  • Web Data