Dagstuhl Seminar 26051

User-Aligned Assessment of AI Systems

(Jan 25 – Jan 30, 2026)

Permalink
Please use the following short URL to reference this page: https://www.dagstuhl.de/26051

Organizers
  • YooJung Choi
  • Georgios Fainekos
  • Siddharth Srivastava
  • Hazem Torfah

Motivation

This Dagstuhl Seminar addresses research gaps in the continual assessment of AI systems amid post-deployment changes in requirements, user-specific objectives, deployment environments, and the AI systems themselves.

With non-expert users increasingly encountering AI systems, the operational domain of AI has expanded from single-purpose tools to more generalized applications. This shift raises broad questions about post-deployment assessment of these systems' limits and capabilities, especially as they tackle user-specific tasks and environments that were not envisioned during design. The seminar will focus on processes and technical approaches for conceptualizing, managing, and enforcing continual, user-driven assessment of AI systems. The emphasis will be on systems that adapt and learn as user requirements and deployment environments evolve.

The seminar will bring together ideas across two highly active fields of research: AI and formal methods. Some of the key research questions motivating this seminar include:

  • Can we design well-founded algorithmic approaches for identifying the limits and capabilities of a learning-enabled AI system?
  • Can we design systems that enable users to specify their task objectives as well as fairness and safety considerations, while avoiding the pitfalls of mis-specified or under-specified preferences and objectives?
  • How might we evaluate compliance with such properties efficiently and on the fly for new tasks and environments?
  • What are the use cases in which a non-expert user needs to assess the performance, safety, and reliability of an AI system?
  • Which ethical considerations must be taken into account when developing user-aligned assessment methods for AI systems?
  • How might changes in regulatory frameworks affect the development and deployment of user-aligned assessment strategies for embodied AI systems?
  • What kinds of assessment protocols and interfaces should new AI systems provide to support such post-deployment assessment?
  • Finally, how would assessment approaches differ for embodied vs. purely computational AI agents?

These problems extend beyond classical verification and validation, where operational requirements and system specifications are available a priori. In contrast, adaptive AI systems, such as household robots, may change their control paradigms through system updates and/or learning, as well as through adaptation to day-to-day changes in requirements (which may be user-provided) and in the dynamic environments in which they operate. All these factors make the assessment of adaptive AI systems an emerging and pressing problem that has received relatively little research attention.

We will explore what it means for users to assess the safety and performance of AI systems that continuously evolve and adapt. Discussions will focus on specifying and managing properties from the user's perspective, and methods for verifying, monitoring, and enforcing safety and alignment.
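
The gap between design-time verification and this kind of post-deployment, user-driven checking can be made concrete with a small sketch. The following Python fragment is a minimal illustration under assumptions of our own, not a method proposed by the organizers: a hypothetical `RuntimeMonitor` lets a user register named safety predicates over observed system states and revise them after deployment, and each new observation is checked against the current set of properties on the fly. All names, properties, and thresholds here are invented for illustration.

```python
from dataclasses import dataclass, field
from typing import Callable, Dict, List, Tuple

# Observed system state: a flat mapping from signal names to values.
State = Dict[str, float]
# A user-specified property: a predicate over a single observed state.
Property = Callable[[State], bool]

@dataclass
class RuntimeMonitor:
    """Checks each observation against the user's current properties."""
    properties: Dict[str, Property] = field(default_factory=dict)
    violations: List[Tuple[int, str]] = field(default_factory=list)

    def set_property(self, name: str, prop: Property) -> None:
        # Users may add or revise requirements post-deployment.
        self.properties[name] = prop

    def observe(self, step: int, state: State) -> List[str]:
        # Evaluate every active property on the new observation and
        # record the names of those that fail.
        failed = [name for name, prop in self.properties.items()
                  if not prop(state)]
        self.violations.extend((step, name) for name in failed)
        return failed

# Hypothetical example: a household-robot user requires a minimum
# clearance from people and a speed cap (both thresholds invented).
monitor = RuntimeMonitor()
monitor.set_property("clearance", lambda s: s["dist_to_person"] >= 0.5)
monitor.set_property("speed_cap", lambda s: s["speed"] <= 1.0)

trace = [
    {"dist_to_person": 1.2, "speed": 0.8},
    {"dist_to_person": 0.3, "speed": 0.6},  # too close to a person
]
for t, state in enumerate(trace):
    for name in monitor.observe(t, state):
        print(f"step {t}: violated '{name}'")
```

The design point the sketch highlights is that the set of properties is mutable at runtime, mirroring the seminar's emphasis on requirements that evolve with the user and the deployment environment rather than being fixed a priori.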

Copyright YooJung Choi, Georgios Fainekos, Siddharth Srivastava, and Hazem Torfah

Classification
  • Artificial Intelligence
  • Logic in Computer Science
  • Machine Learning

Keywords
  • Continuous assessment
  • safe AI
  • requirements
  • cyber-physical systems
  • robotics