

Dagstuhl Seminar 25242

Testing Program Analyzers and Verifiers

(June 9 – June 12, 2025)

Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/25242

Organizers

Contact

Motivation

In an era where software pervades every facet of modern life, ensuring the correctness of software systems is of paramount importance. Program analyzers and verifiers are integral components of this endeavor. They provide developers with essential tools to identify and prevent potential software faults before they impact production systems and end users. However, just like all software, analyzers and verifiers are not free from faults. Faults in analysis and verification tools can undermine the entire software ecosystem, leading to missed vulnerabilities, wasted development effort, and more. The goal of this Dagstuhl Seminar is to (1) identify the challenges associated with ensuring the reliability of program analyzers and verifiers, (2) discuss potential ways to address these challenges, and (3) connect practitioners and researchers working in this domain. In particular, the seminar aims to bring together experts in program analysis, verification, automated testing, and formal methods. The discussion will focus on three themes:

  • Severity of faults within program analysis and verification tools: This involves a detailed examination of how different types of faults can impact various user groups (e.g., end users, software developers), taking into account the context and the intended users of the analysis. For example, we plan to elucidate essential properties required of analyses (e.g., soundness for safety-critical software), while discussing which properties may be less critical.
  • Automated generation of test inputs for analyzers and verifiers: We aim to thoroughly discuss the challenge of generating test inputs for finding faults in program analysis and verification, and to explore whether existing program generators used in other contexts (e.g., compiler testing) can potentially be applied to validate analyzers/verifiers. For example, program analysis and verification tools are routinely used to detect a huge variety of semantic errors (e.g., buffer overflows, type errors, integer overflows, and many more). A key technical challenge here is the generation of programs that exhibit interesting semantic errors that are supposed to be caught by the analyzer/verifier under test.
  • Test oracles to validate program analyzers and verifiers: Automatically testing an analyzer or verifier requires a test oracle to determine whether it functions as expected. Given that the majority of program analyzers/verifiers lack a specification and are not standardized, this introduces a significant challenge in determining whether their behavior is correct. The goal is to discuss potential test oracles for different types of faults within program analyzers and verifiers.
Copyright Maria Christakis, Alastair F. Donaldson, John Regehr, and Thodoris Sotiropoulos
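The second and third themes above can be illustrated with a minimal sketch: if a program generator deliberately seeds a known fault (here, an out-of-bounds array write in a tiny C program), the seeded fault's location serves as a ground-truth test oracle for the analyzer under test. The `stub_analyzer` below is a stand-in, not any real tool; in an actual study it would be replaced by an invocation of the analyzer or verifier being validated.

```python
import random
import re

def generate_program_with_overflow(seed: int) -> tuple[str, int]:
    """Generate a tiny C program containing a seeded out-of-bounds
    write, and return (source, 1-based line number of the fault)."""
    rng = random.Random(seed)
    size = rng.randint(2, 8)
    index = size + rng.randint(0, 3)  # always >= size, i.e. out of bounds
    lines = [
        "#include <stdio.h>",
        "int main(void) {",
        f"    int buf[{size}];",
        f"    buf[{index}] = 42;",  # the seeded overflow
        '    printf("%d\\n", buf[0]);',
        "    return 0;",
        "}",
    ]
    fault_line = 4  # position of the seeded write in the template
    return "\n".join(lines), fault_line

def stub_analyzer(source: str) -> set[int]:
    """Stand-in analyzer: flag constant-index writes past the declared
    array bound. A real study would run an actual analyzer here."""
    size = int(re.search(r"int buf\[(\d+)\]", source).group(1))
    reports = set()
    for lineno, line in enumerate(source.splitlines(), start=1):
        m = re.search(r"buf\[(\d+)\] =", line)
        if m and int(m.group(1)) >= size:
            reports.add(lineno)
    return reports

def oracle(reported: set[int], fault_line: int) -> str:
    """Ground-truth oracle: the seeded fault must be among the reports;
    a miss on a known-present bug indicates potential unsoundness."""
    return "detected" if fault_line in reported else "missed"

source, fault_line = generate_program_with_overflow(seed=1)
verdict = oracle(stub_analyzer(source), fault_line)
```

The same harness generalizes: generating many seeded variants and classifying each verdict gives a crude soundness measure for the tool under test, while spurious extra reports on fault-free variants would instead probe precision.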

Classification
  • Programming Languages
  • Software Engineering

Keywords
  • testing
  • formal methods
  • verification
  • program analysis
  • static analysis