Dagstuhl Seminar 25242
Testing Program Analyzers and Verifiers
(Jun 09 – Jun 12, 2025)
Organizers
- Maria Christakis (TU Wien, AT)
- Alastair F. Donaldson (Imperial College London, GB)
- John Regehr (University of Utah - Salt Lake City, US)
- Thodoris Sotiropoulos (ETH Zürich, CH)
Contact
- Michael Gerke (for scientific matters)
- Simone Schilke (for administrative matters)
In an era where software pervades every facet of modern life, ensuring the correctness of software systems is of paramount importance. Program analyzers and verifiers are integral to this endeavor: they provide developers with essential tools to identify and prevent potential software faults before they impact production systems and end users. However, just like all software, analyzers and verifiers are not free from faults. Faults in analysis and verification tools can undermine the entire software ecosystem, leading to missed vulnerabilities, wasted development effort, and more. The goal of this Dagstuhl Seminar is to (1) identify the challenges associated with ensuring the reliability of program analyzers and verifiers, (2) discuss potential ways to address these challenges, and (3) connect practitioners and researchers working in this domain. In particular, the seminar aims to bring together experts in program analysis, verification, automated testing, and formal methods. The discussion will focus on three themes:
- Severity of faults within program analysis and verification tools: This involves a detailed examination of how different types of faults impact various user groups (e.g., end users, software developers), taking into account the context and the intended users of the analysis. For example, we plan to elucidate which properties are essential for a given analysis (e.g., soundness for safety-critical software) and which may be less critical.
- Automated generation of test inputs for analyzers and verifiers: We aim to thoroughly discuss the challenge of generating test inputs that expose faults in program analysis and verification tools, and to explore whether existing program generators used in other contexts (e.g., compiler testing) can potentially be applied to validate analyzers/verifiers. For example, program analysis and verification tools are routinely used to detect a wide variety of semantic errors (e.g., buffer overflows, type errors, integer overflows, and many more). A key technical challenge is therefore the generation of programs that exhibit interesting semantic errors that the analyzer/verifier under test is supposed to catch (see the first sketch after this list).
- Test oracles to validate program analyzers and verifiers: Automatically testing an analyzer or verifier requires a test oracle to determine whether the tool functions as expected. Because most program analyzers/verifiers lack a specification and are not standardized, determining whether their behavior is correct is a significant challenge. The goal is to discuss potential test oracles for different types of faults within program analyzers and verifiers (a differential-testing sketch follows this list).
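As a concrete illustration of the input-generation theme, the sketch below is not part of the seminar text; the generator and the injected bug pattern are our own assumptions. It produces a tiny C program containing an out-of-bounds write at a known location, so that an analyzer under test would be expected to report exactly this error.

```python
import random

# Minimal sketch (hypothetical): generate a small C program that deliberately
# contains an out-of-bounds write, so an analyzer under test should flag it.
def generate_buggy_program(seed: int) -> str:
    rng = random.Random(seed)
    size = rng.randint(1, 8)           # declared array size
    index = size + rng.randint(0, 3)   # index guaranteed to be out of bounds
    return "\n".join([
        "#include <stdio.h>",
        "int main(void) {",
        f"    int buf[{size}];",
        f"    buf[{index}] = 42;   /* injected out-of-bounds write */",
        "    printf(\"%d\\n\", buf[0]);",
        "    return 0;",
        "}",
    ])

if __name__ == "__main__":
    print(generate_buggy_program(seed=1))
```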
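For the oracle theme, one option that could be discussed is differential testing: run two independent analyzers on the same program and treat any disagreement as a candidate fault in one of them. The sketch below is hypothetical; the analyzer commands (analyzer-a, analyzer-b) and the convention that a nonzero exit code signals a reported warning are assumptions, not real tools.

```python
import subprocess

def verdict(cmd: list[str], program: str) -> bool:
    """Return True if the (hypothetical) analyzer reports a warning for `program`."""
    result = subprocess.run(cmd + [program], capture_output=True, text=True)
    return result.returncode != 0  # assumption: nonzero exit code == warning

def differential_oracle(program: str) -> None:
    # Hypothetical analyzer invocations; any disagreement is a candidate fault.
    a = verdict(["analyzer-a", "--check=bounds"], program)
    b = verdict(["analyzer-b", "--check=bounds"], program)
    if a != b:
        print(f"Disagreement on {program}: analyzer-a={a}, analyzer-b={b}")

if __name__ == "__main__":
    differential_oracle("test_program.c")
```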
Classification
- Programming Languages
- Software Engineering
Keywords
- testing
- formal methods
- verification
- program analysis
- static analysis