Dagstuhl-Seminar 15191
Compositional Verification Methods for Next-Generation Concurrency
(May 3 – May 8, 2015)
Organizers
- Lars Birkedal (Aarhus University, DK)
- Derek Dreyer (MPI-SWS - Saarbrücken, DE)
- Philippa Gardner (Imperial College London, GB)
- Zhong Shao (Yale University, US)
Contact
- Andreas Dolzmann (for scientific matters)
- Susanne Bach-Bernhard (for administrative matters)
One of the major open problems confronting software developers today is how to cope with the complexity of reasoning about large-scale concurrent programs. Such programs are increasingly important as a means of taking advantage of parallelism in modern architectures. However, they also frequently depend on subtle invariants governing the use of shared mutable data structures, which must take into account the potential interference between different threads accessing the state simultaneously. Just figuring out how to express such invariants at all has proven to be a very challenging problem; even more challenging is how to support local reasoning about such invariants, i.e., confining the reasoning about them to only the components of the program that absolutely need to know about them.
Fortunately, we are now at a point where verification research has produced the critical foundations needed to tackle this problem: namely, compositional methods, which exploit the inherently modular structure of realistic concurrent programs in order to decompose verification effort along module boundaries. Fascinatingly, a variety of different but related compositional methods have been developed contemporaneously in the last several years:
- Separation logics: Separation logic was developed initially as a generalization of Hoare logic – supporting local, compositional reasoning about sequential, heap-manipulating programs – and much of the early work on separation logic has been successfully incorporated into automated verification tools like Smallfoot [2], SLAyer [3], Abductor [6], etc., scaling to handle millions of lines of code. Recently, there has been a series of breakthroughs in adapting separation logic to handle concurrent programs as well. Concurrent separation logic [17] provides coarse-grained local reasoning about concurrent programs; combining this local reasoning with rely-guarantee reasoning [26] provides fine-grained concurrent reasoning; intertwining abstraction with local reasoning enables a client to reason about the use of a set module [8] without having to think about the underlying implementation using lists or concurrent B-trees; and, very recently, all this has been extended to account for higher-order programs as well [21].
- Kripke models: There is a long line of work on the use of semantic models like Kripke logical relations [1, 9] (and more recently bisimulations [19, 20]) for proving observational equivalence of programs that manipulate local state. Observational equivalence is useful not only for establishing correctness of program transformations (e.g., in compiler certification) but also as a verification method in its own right (e.g., one can prove that a complex but efficient implementation of an ADT is equivalent to a simple but inefficient reference implementation). However, it is only in the last few years that such models have been generalized to account for the full panoply of features available in modern languages: higher-order state, recursion, abstract types, control operators, and most recently concurrency, resulting in some of the first formal proofs of correctness of sophisticated fine-grained concurrent algorithms in a higher-order setting [1, 9, 23]. These advances have come about thanks to the development of more elaborate Kripke structures for representing invariants on local state.
- Hoare type theory: Dependent type theory provides a very expressive compositional verification system for higher-order functional programs, so expressive that types can characterize full functional correctness. Traditionally, however, dependent type theories were limited to verification of pure programs. Recent work on Hoare type theory (HTT) [15] has shown how to integrate effects into dependent type theory by incorporating Hoare triples as a new primitive type, and prototypes of HTT have been implemented in Coq [7, 16], allowing for imperative programs to be verified mechanically as they are being written. Moreover, first steps of extending HTT with concurrency have recently been taken [14], thus giving hope for a potential future integration of design and verification for higher-order concurrent programs.
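The local-reasoning principle that the separation-logic line of work exploits can be summarized by two standard proof rules, shown here in simplified form (mathpartir-style notation; side conditions on variable interference are omitted): the frame rule, and O'Hearn's parallel-composition rule, which composes threads verified against disjoint portions of the heap without enumerating interleavings.

```latex
% Frame rule: a local proof about C remains valid in any larger
% heap described by R that C does not touch.
\inferrule
  {\{P\}\; C \;\{Q\}}
  {\{P * R\}\; C \;\{Q * R\}}

% Parallel composition: threads acting on disjoint resources
% can be verified independently and composed.
\inferrule
  {\{P_1\}\; C_1 \;\{Q_1\} \\ \{P_2\}\; C_2 \;\{Q_2\}}
  {\{P_1 * P_2\}\; C_1 \parallel C_2 \;\{Q_1 * Q_2\}}
```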
All in all, the field of modular concurrency verification is highly active, with groundbreaking new developments in these and other approaches coming out every year. Particularly fascinating is the appearance of deep connections between the different methods. There are striking similarities, for instance, between the advanced Kripke structures used in recent relational models of higher-order state and the semantic models underlying recent concurrent separation logics.
Nevertheless, there are a number of ways in which the advanced models and logics developed thus far are still in their infancy. Most of these approaches, for example, have only been applied to the verification of small, self-contained ADTs and have not yet been scaled up to verify large-scale modular concurrent programs. Moreover, even the most state-of-the-art compositional methods do not yet account for a number of the essential complexities of concurrent programming as it is practiced today, including:
- Weak memory models: The vast majority of state-of-the-art compositional verification methods are proved sound with respect to an operational semantics that assumes a sequentially consistent memory model. However, modern hardware implements weak memory models that allow for many more reorderings of basic operations. Thus there is a clear gap between verification theory and practice that needs to be filled (for efficiency reasons, we of course do not want to force programmers and compilers to insert enough memory fences to make the hardware behave in a sequentially consistent manner). This problem has been known for the last decade, but it is only in the last year or two that formal descriptions of the behavior of programming languages with weak memory models have been developed. Given this foundation, we should now be able to make progress on extending compositional verification methods to weak memory models.
- Higher-order concurrency: Higher-order functional abstraction is an indispensable feature of most modern, high-level programming languages. It is also central to a variety of concurrent programming idioms, both established and nascent: work stealing [4], Concurrent ML-style events [18], concurrent iterators [13], parallel evaluation strategies [22], STM [11], reagents [24], and more. Yet, only a few logics have been proposed that even attempt to account for higher-order concurrency [21, 14, 12], and these logics are just first steps – for example, they do not presently account for sophisticated “fine-grained” concurrent ADTs. Verification of higher-order concurrent programs remains a largely open problem.
- Generalizing linearizability: Sophisticated concurrent data structures often use fine-grained synchronization to maximize the possibilities for parallel access. The classical correctness criterion for such fine-grained data structures is linearizability, which ensures that every operation has a linearization point at which it appears (to clients) to atomically take effect. However, existing logics do not provide a way to exploit linearizability directly in client-side reasoning, and moreover the notion does not scale naturally to account for operations (such as higher-order iterators) whose behavior is not semantically atomic. Recently, researchers have started to investigate alternative approaches, based on contextual refinement [10, 23]. And methods for reasoning about operations with multiple linearization points are also being developed.
- Liveness properties: Synchronization of concurrent data structures can also affect the progress of the execution of the client threads. Various progress properties have been proposed for concurrent objects. The most important ones are wait-freedom, lock-freedom and obstruction-freedom for non-blocking implementations, and starvation-freedom and deadlock-freedom for lock-based implementations. These properties describe conditions under which method calls are guaranteed to successfully complete in an execution. Traditional definitions (which are quite informal) of these progress properties are difficult to use in modular program verification because they fail to describe how the progress properties affect clients. It is also unclear how existing separation logics, which were primarily designed for proving partial correctness, can be adapted to prove progress properties. Recently, researchers have started to combine quantitative reasoning about resource bounds with separation logics, a combination that offers new possibilities for verifying both safety and liveness properties in a single framework.
Grappling with these kinds of limitations is essential if our verification technology is to be relevant to real-world programs running on modern architectures, and as such it poses exciting new research questions that we as a community are just beginning to explore.
In this seminar, we brought together a wide variety of researchers on concurrency verification, as well as leading experts on concurrent software development in both high- and low-level languages. The goal was to facilitate a stimulating interchange between the theory and practice of concurrent programming, and thereby foster the development of compositional verification methods that can scale to handle the realities of next-generation concurrency.
Among the concrete research challenges investigated in depth during the seminar were the following:
- What are good ways of reasoning about weak memory models? It should be possible to reason about low-level programs that exploit weak memory models (e.g., locks used inside operating systems), but also to reason at higher levels of abstraction for programs that use sufficient locking.
- What is the best way to define a language-level memory model that is nevertheless efficiently implementable on modern hardware? C11 is the state of the art, but it is flawed in various ways, and we heard about a number of different ways of possibly fixing it.
- What is the best way to mechanize full formal verification of concurrent programs, using interactive proof assistants such as Coq?
- How can we adapt existing (or develop new) compositional techniques for reasoning about liveness properties of concurrent programs? Can we apply quantitative techniques to reduce the proof of a liveness property to the proof of a stronger safety property? Also, recent work on rely-guarantee-based simulation can prove linearizability of a sophisticated concurrent object by showing that the concurrent implementation is a contextual refinement of its sequential specification. We hope that similar techniques can be used to prove progress properties as well.
- Only recently have researchers begun to propose logics and models for higher-order concurrency [23, 21]. What are the right concurrency abstractions for higher-order concurrent programming idioms as diverse as transactional memory [11], Concurrent ML [18], joins [25], and reagents [24], among others? What is the best way to even specify, let alone verify, programs written in these idioms, and are there unifying principles that would apply to multiple different idioms?
- Most verification work so far has focused on shared-memory concurrency, with little attention paid to message-passing concurrency (except for some recent work on verifying the C# joins library). Can the models and logics developed for the former be carried over usefully to the latter, and what is the connection (if any) with recent work on proof-theoretic accounts of session types [5]? Can session types help to simplify reasoning about some classes of concurrent programs, e.g., those that only involve some forms of message passing and not full shared memory?
- A number of recent Kripke models and separation logics have employed protocols of various forms to describe the invariants about how the semantic state of a concurrent ADT can evolve over time. But different approaches model protocols differently, e.g., using enriched forms of state transition systems vs. partial commutative monoids. Is there a canonical way of representing these protocols formally and thus better understanding the relationship between different proof methods?
- There seem to be tradeoffs between approaches to concurrency verification based on Hoare logic vs. refinement (unary vs. relational reasoning), with the former admitting a wider variety of formal specifications but the latter offering better support for reasoning about atomicity. Consequently, a number of researchers are actively working on trying to combine both styles of reasoning in a unified framework. What is the best way to do this?
- To what extent do we need linearizability to facilitate client-side reasoning? Is it possible in many cases for clients to rely on a much weaker specification? And what ways are there to formalize looser notions, e.g., ones with multiple linearization points?
- Now that we are finally developing logics and models capable of verifying realistic concurrent algorithms, can we abstract away useful proof patterns and automate them? What is needed in order to integrate support for concurrent invariants into automated verification tools like SLAyer and Abductor?
These challenges were explored in participants’ talks and in group discussions; see the list of talk abstracts below.
References
[1] Amal Ahmed, Derek Dreyer, and Andreas Rossberg. State-dependent representation independence. In POPL, 2009.
[2] Josh Berdine, Cristiano Calcagno, and Peter W. O’Hearn. Smallfoot: Modular automatic assertion checking with separation logic. In FMCO, 2005.
[3] Josh Berdine, Byron Cook, and Samin Ishtiaq. SLAyer: Memory safety for systems-level code. In CAV, 2011.
[4] Robert D. Blumofe, Christopher F. Joerg, Bradley C. Kuszmaul, Charles E. Leiserson, Keith H. Randall, and Yuli Zhou. Cilk: An efficient multithreaded runtime system. JPDC, 37(1):55–69, August 1996.
[5] Luís Caires and Frank Pfenning. Session types as intuitionistic linear propositions. In CONCUR, 2010.
[6] Cristiano Calcagno, Dino Distefano, Peter O’Hearn, and Hongseok Yang. Compositional shape analysis by means of bi-abduction. Journal of the ACM, 58(6), 2011.
[7] Adam Chlipala, Gregory Malecha, Greg Morrisett, Avraham Shinnar, and Ryan Wisnesky. Effective interactive proofs for higher-order imperative programs. In ICFP, 2009.
[8] Thomas Dinsdale-Young, Mike Dodds, Philippa Gardner, Matthew Parkinson, and Viktor Vafeiadis. Concurrent abstract predicates. In ECOOP, June 2010.
[9] Derek Dreyer, Georg Neis, and Lars Birkedal. The impact of higher-order state and control effects on local relational reasoning. Journal of Functional Programming, 22(4&5):477–528, August 2012.
[10] I. Filipovic, P. W. O’Hearn, N. Rinetzky, and H. Yang. Abstraction for concurrent objects. In ESOP, 2009.
[11] Tim Harris, Simon Marlow, Simon Peyton-Jones, and Maurice Herlihy. Composable memory transactions. In PPoPP, 2005.
[12] Aquinas Hobor, Andrew W. Appel, and Francesco Zappa Nardelli. Oracle semantics for concurrent separation logic. In ESOP, 2008.
[13] Doug Lea. The java.util.concurrent ConcurrentHashMap.
[14] Ruy Ley-Wild and Aleksandar Nanevski. Subjective auxiliary state for coarse-grained concurrency. In POPL, 2013.
[15] Aleksandar Nanevski, Greg Morrisett, and Lars Birkedal. Hoare type theory, polymorphism and separation. JFP, 18(5–6):865–911, 2008.
[16] Aleksandar Nanevski, Viktor Vafeiadis, and Josh Berdine. Structuring the verification of heap-manipulating programs. In POPL, 2010.
[17] Peter W. O’Hearn. Resources, concurrency and local reasoning. Theoretical Computer Science, 375(1–3):271–307, May 2007.
[18] John H. Reppy. Higher-order concurrency. PhD thesis, Cornell University, 1992.
[19] Davide Sangiorgi, Naoki Kobayashi, and Eijiro Sumii. Environmental bisimulations for higher-order languages. In LICS, 2007.
[20] Eijiro Sumii. A complete characterization of observational equivalence in polymorphic lambda-calculus with general references. In CSL, 2009.
[21] Kasper Svendsen, Lars Birkedal, and Matthew Parkinson. Modular reasoning about separation of concurrent data structures. In ESOP, 2013.
[22] P. W. Trinder, K. Hammond, H.-W. Loidl, and S. L. Peyton Jones. Algorithm + strategy = parallelism. JFP, 8(1):23–60, January 1998.
[23] Aaron Turon, Jacob Thamsborg, Amal Ahmed, Lars Birkedal, and Derek Dreyer. Logical relations for fine-grained concurrency. In POPL, 2013.
[24] Aaron Turon. Reagents: expressing and composing fine-grained concurrency. In PLDI, 2012.
[25] Aaron Turon and Claudio Russo. Scalable join patterns. In OOPSLA, 2011.
[26] Viktor Vafeiadis. Modular fine-grained concurrency verification. PhD thesis, University of Cambridge, 2008.
Participants
- Lennart Beringer (Princeton University, US)
- Lars Birkedal (Aarhus University, DK)
- Andrea Cerone (IMDEA Software - Madrid, ES)
- Adam Chlipala (MIT - Cambridge, US)
- Karl Crary (Carnegie Mellon University, US)
- Pedro Da Rocha Pinto (Imperial College London, GB)
- Thomas Dinsdale-Young (Aarhus University, DK)
- Mike Dodds (University of York, GB)
- Alastair F. Donaldson (Imperial College London, GB)
- Cezara Dragoi (IST Austria - Klosterneuburg, AT)
- Derek Dreyer (MPI-SWS - Saarbrücken, DE)
- Xinyu Feng (Univ. of Science & Technology of China - Suzhou, CN)
- Philippa Gardner (Imperial College London, GB)
- Alexey Gotsman (IMDEA Software - Madrid, ES)
- Aquinas Hobor (National University of Singapore, SG)
- Jan Hoffmann (Yale University, US)
- Chung-Kil Hur (Seoul National University, KR)
- Bart Jacobs (KU Leuven, BE)
- Cliff B. Jones (University of Newcastle, GB)
- Ralf Jung (MPI-SWS - Saarbrücken, DE)
- Eric Koskinen (IBM TJ Watson Research Center - Yorktown Heights, US)
- Neel Krishnaswami (University of Birmingham, GB)
- Ori Lahav (MPI-SWS - Kaiserslautern, DE)
- Hongjin Liang (Univ. of Science & Technology of China - Suzhou, CN)
- Paul McKenney (IBM - Beaverton, US)
- Maged M. Michael (IBM TJ Watson Research Center - Yorktown Heights, US)
- Peter Müller (ETH Zürich, CH)
- Aleksandar Nanevski (IMDEA Software - Madrid, ES)
- Kyndylan Nienhuis (University of Cambridge, GB)
- Scott Owens (University of Kent, GB)
- Frank Pfenning (Carnegie Mellon University, US)
- Jean Pichon-Pharabod (University of Cambridge, GB)
- Francois Pottier (INRIA - Le Chesnay, FR)
- Shaz Qadeer (Microsoft Corporation - Redmond, US)
- Azalea Raad (Imperial College London, GB)
- John Reppy (University of Chicago, US)
- Noam Rinetzky (Tel Aviv University, IL)
- Claudio Russo (Microsoft Research UK - Cambridge, GB)
- Ilya Sergey (IMDEA Software - Madrid, ES)
- Peter Sewell (University of Cambridge, GB)
- Zhong Shao (Yale University, US)
- Kasper Svendsen (Aarhus University, DK)
- Joseph Tassarotti (Carnegie Mellon University, US)
- Viktor Vafeiadis (MPI-SWS - Kaiserslautern, DE)
- Martin Vechev (ETH Zürich, CH)
- Nobuko Yoshida (Imperial College London, GB)
Classification
- programming languages / compiler
- semantics / formal methods
- verification / logic
Keywords
- concurrency
- compositional verification
- mutable state
- program logics
- semantics
- type systems
- weak memory models
- higher-order programming