Dagstuhl Seminar 25242

Testing Program Analyzers and Verifiers

(Jun 09 – Jun 12, 2025)

Permalink
Please use the following short url to reference this page: https://www.dagstuhl.de/25242

Motivation

In an era where software pervades every facet of modern life, ensuring the correctness of software systems is of paramount importance. Program analyzers and verifiers are integral to this endeavor: they provide developers with essential tools to identify and prevent potential software faults before they impact production systems and end users. However, like all software, analyzers and verifiers are not free from faults. Faults in analysis and verification tools can undermine the entire software ecosystem, leading to missed vulnerabilities, wasted development effort, and more. The goal of this Dagstuhl Seminar is to (1) identify the challenges associated with ensuring the reliability of program analyzers and verifiers, (2) discuss potential ways to address these challenges, and (3) connect practitioners and researchers working in this domain. In particular, the seminar aims to bring together experts in program analysis, verification, automated testing, and formal methods. The discussion will focus on three themes:

  • Severity of faults within program analysis and verification tools: This involves a detailed examination of how different types of faults can impact various user groups (e.g., end users, software developers) by taking into consideration the context and the intended users of the analysis. For example, we plan to elucidate essential properties required in analyses (e.g., soundness for safety-critical software), while discussing which properties may be less critical.
  • Automated generation of test inputs for analyzers and verifiers: We aim to thoroughly discuss the challenge of test input generation for finding faults in program analysis and verification, and to explore whether existing program generators used in other contexts (e.g., compiler testing) can be applied to validate analyzers/verifiers. For example, program analysis and verification tools are routinely used to detect a huge variety of semantic errors (e.g., buffer overflows, type errors, integer overflows, and many more). A key technical challenge is the generation of programs that exhibit interesting semantic errors that are supposed to be caught by the analyzer/verifier under test.
  • Test oracles to validate program analyzers and verifiers: Automatically testing an analyzer or verifier requires a test oracle to determine whether it functions as expected. Given that the majority of program analyzers/verifiers lack a specification and are not standardized, this introduces a significant challenge in determining whether their behavior is correct. The goal is to discuss potential test oracles for different types of faults within program analyzers and verifiers.
Copyright Maria Christakis, Alastair F. Donaldson, John Regehr, and Thodoris Sotiropoulos
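One common way to sidestep the missing-specification problem described above is differential testing: run two analyzers (or two configurations of one analyzer) on the same generated inputs and flag any disagreement in their verdicts as a potential fault. The sketch below illustrates the idea on deliberately toy "analyzers" (plain Python functions that judge whether a division may fail); the function names, the injected bug, and the two-valued verdicts are illustrative assumptions, not part of any real tool.

```python
import random

def analyzer_reference(a, b):
    """Toy reference analyzer: division a / b 'may-fail' iff the divisor is 0."""
    return "may-fail" if b == 0 else "safe"

def analyzer_under_test(a, b):
    """Same check, but with an injected fault for demonstration:
    it wrongly assumes that 0 / b is always safe, missing the 0 / 0 case."""
    if a == 0:
        return "safe"  # BUG: ignores b == 0 when a == 0
    return "may-fail" if b == 0 else "safe"

def differential_test(n_inputs=1000, seed=0):
    """Generate random inputs, run both analyzers, and collect disagreements.
    Each disagreement is a candidate fault in one of the two analyzers."""
    rng = random.Random(seed)
    disagreements = []
    for _ in range(n_inputs):
        a, b = rng.randint(-3, 3), rng.randint(-3, 3)
        v_ref = analyzer_reference(a, b)
        v_test = analyzer_under_test(a, b)
        if v_ref != v_test:
            disagreements.append((a, b, v_ref, v_test))
    return disagreements
```

In practice the "analyzers" would be external tools invoked on generated programs rather than Python functions, and the hard part is exactly what the themes above identify: generating programs that exercise interesting semantic errors, and deciding which of the disagreeing tools is actually wrong.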

Classification
  • Programming Languages
  • Software Engineering

Keywords
  • testing
  • formal methods
  • verification
  • program analysis
  • static analysis