Dagstuhl Seminar 17352
Analysis and Synthesis of Floating-point Programs
(Aug 27 – Aug 30, 2017)
Organizers
- Eva Darulova (MPI-SWS - Saarbrücken, DE)
- Alastair F. Donaldson (Imperial College London, GB)
- Zvonimir Rakamaric (University of Utah - Salt Lake City, US)
- Cindy Rubio-Gonzalez (University of California - Davis, US)
Contact
- Michael Gerke (for scientific matters)
- Annette Beyer (for administrative matters)
Impacts
- Combining Tools for Optimization and Analysis of Floating-Point Computations. Heiko Becker, Pavel Panchekha, Eva Darulova, Zachary Tatlock. In: International Symposium on Formal Methods (FM 2018), LNCS 10951, Springer, Berlin, 2018, pp. 355–363.
- Icing: Supporting Fast-Math Style Optimizations in a Verified Compiler. Heiko Becker, Eva Darulova, Magnus O. Myreen, Zachary Tatlock. In: LNCS 11561, Springer, Berlin, 2019, pp. 155–173.
Motivation
Nowadays, floating-point computation is supported on a wide range of computing platforms, ranging from smartphones and embedded devices all the way to supercomputers. However, writing correct, yet high-performance and energy-efficient floating-point software is a challenging and complex task that often requires high levels of expertise. This is in large part due to the non-intuitive nature of floating-point arithmetic: roundoff error means that many algebraic laws of the real numbers do not hold in the floating-point setting. There is a dire need to empower developers with automated techniques and tools for analysis and synthesis of floating-point programs.
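For instance, real addition is associative, but its floating-point counterpart is not. The following small C program is an illustrative sketch (not part of the seminar materials) that makes the discrepancy visible:

```c
#include <stdio.h>

int main(void) {
    /* Associativity of real addition does not survive rounding:
       1e16 + (-1e16) + 1.0 gives different results depending on
       how the operations are grouped. */
    double a = 1e16, b = -1e16, c = 1.0;
    printf("(a + b) + c = %.17g\n", (a + b) + c);  /* prints 1 */
    printf("a + (b + c) = %.17g\n", a + (b + c));  /* prints 0 */

    /* Similarly, (x + y) - y is not always x. */
    double x = 1.0, y = 1e16;
    printf("(x + y) - y = %.17g\n", (x + y) - y);  /* prints 0, not 1 */
    return 0;
}
```

In the second grouping, the small summand is absorbed by the large one before it can contribute, so the rounded result differs from the first grouping by the entire value of c.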
Research related to floating-point computation spans a multitude of areas of computer science, ranging from hardware design and architecture all the way to high-performance computing, machine learning, and software analysis and verification. The focus of this Dagstuhl Seminar is to connect researchers from these areas, with the goal of fostering collaboration and brainstorming new theoretical advances, practical techniques, and tools to improve the reliability and performance of floating-point computation.
There has been a lot of recent interest in floating-point optimization and reasoning, including expression rewriting and instruction generation to improve precision; abstract interpretation, automated theorem proving, SMT solving and global optimization for rigorous reasoning; and integration of lower-precision computations into floating-point programs to conserve energy while achieving acceptably stable results (important in high-performance computing and machine learning).
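To give a flavour of the expression-rewriting idea mentioned above (a sketch for illustration only, not drawn from any particular tool): for large x, the expression sqrt(x+1) - sqrt(x) suffers catastrophic cancellation, while the algebraically equivalent conjugate form does not.

```c
#include <stdio.h>
#include <math.h>

int main(void) {
    double x = 1e15;

    /* Naive form: the two square roots are nearly equal, so the
       subtraction cancels almost all significant digits. */
    double naive = sqrt(x + 1.0) - sqrt(x);

    /* Rewritten form: identical over the reals (multiply and divide
       by the conjugate), but free of the cancellation. */
    double rewritten = 1.0 / (sqrt(x + 1.0) + sqrt(x));

    printf("naive     = %.17g\n", naive);
    printf("rewritten = %.17g\n", rewritten);
    return 0;
}
```

Compiled with a standard C compiler (linking with -lm), the naive form retains only a couple of significant digits at x = 1e15, whereas the rewritten form is accurate to nearly full double precision.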
While the approaches and techniques above have certainly advanced the state of the art, they still suffer from drawbacks such as limited automation, precision, or scalability, which prevent them from being transferred into practical solutions. Much more work is needed, ranging from breakthroughs based on completely novel techniques to novel combinations of existing methods.
The goal of this Dagstuhl Seminar is for the attendees to present their current research in the analysis, synthesis, and implementation of floating-point computations to their colleagues, many of whom come from other areas of computer science. This will foster subsequent discussions and brainstorming sessions, with the ultimate goal of having ideas flow from one area into another, establishing new collaborations, and laying the groundwork for novel research directions.
Press Reviews
- Floating-Point Workshop
Blog post by George Constantinides, posted on his Thinking blog on September 1st, 2017.
Summary
This report documents the program and the outcomes of Dagstuhl Seminar 17352 “Analysis and Synthesis of Floating-point Programs”.
Floating-point numbers provide a finite approximation of real numbers that attempts to strike a fine balance between range, precision, and computational efficiency. Nowadays, floating-point computation is supported on a wide range of computing platforms and is employed in many widely used and important software systems, such as high-performance computing simulations, banking, stock exchanges, self-driving cars, and machine learning.
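As a tiny illustration of this finite approximation (a hedged sketch, not part of the report): the decimal literal 0.1 has no exact binary representation, so the nearest double is stored in its place.

```c
#include <stdio.h>

int main(void) {
    /* 0.1 has no finite binary expansion, so the closest double is
       stored instead; printing with extra precision reveals it. */
    double d = 0.1;
    printf("0.1 stored as      %.20g\n", d);
    printf("0.1 + 0.2 == 0.3 ? %d\n", 0.1 + 0.2 == 0.3);  /* prints 0 */
    return 0;
}
```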
However, writing correct, and yet high-performance and energy-efficient, floating-point code is challenging. For example, floating-point addition is not associative (contrary to its real mathematical counterpart), which creates problems whenever the order of operations is changed, whether by a compiler or by nondeterministic interleavings of concurrent executions. Furthermore, the underlying floating-point hardware is often heterogeneous, so different results may be computed on different platforms, or even on different components of the same heterogeneous platform. Given this complexity, it is not surprising that numerous software bugs have been attributed to incorrectly implemented floating-point computations.
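To make the reordering issue concrete, here is a minimal C sketch (illustrative only, not taken from the report) in which summing the same single-precision values in two different orders yields two different results:

```c
#include <stdio.h>

/* Sequential left-to-right sum. */
static float sum_forward(const float *a, int n) {
    float s = 0.0f;
    for (int i = 0; i < n; i++) s += a[i];
    return s;
}

/* The same values summed right-to-left, as a reordering compiler or a
   different reduction tree might effectively do. */
static float sum_backward(const float *a, int n) {
    float s = 0.0f;
    for (int i = n - 1; i >= 0; i--) s += a[i];
    return s;
}

int main(void) {
    /* One large value followed by many small ones: the order in which
       the small values are absorbed changes the rounded result. */
    enum { N = 1001 };
    float a[N];
    a[0] = 1e8f;
    for (int i = 1; i < N; i++) a[i] = 1.0f;

    printf("forward  = %.9g\n", sum_forward(a, N));
    printf("backward = %.9g\n", sum_backward(a, N));
    return 0;
}
```

In the forward order each 1.0f is rounded away against the large partial sum, while in the backward order the small values accumulate first, so the two sums differ by 1000.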
Research related to floating-point computation spans a multitude of areas of computer science, ranging from hardware design and architecture all the way to high-performance computing, machine learning, and software analysis and verification. The objective of this seminar was thus to bring together researchers from several of these areas, which have either traditionally been considered non-overlapping or have arguably enjoyed insufficient interaction despite a clear overlap of interests. The goal was to provide opportunities to brainstorm new theoretical advances and practical techniques and tools for making floating-point computations performant and correct, and to help foster long-term collaborations.
The seminar involved brief presentations from most participants, interspersed with a great deal of informal technical discussion, as well as four breakout sessions based on common themes that arose during those discussions. In addition, a joint panel session was held between this seminar and the concurrently running “Machine Learning and Formal Methods” seminar. This report presents the collection of abstracts associated with the participant presentations, notes summarising each discussion session, and a transcript of the panel session. We hope that the report will serve both as a useful resource for researchers interested in understanding the state of the art and open problems related to analysing and synthesising floating-point programs, and as a historical record clarifying the status of this field in 2017.
Participants
- Erika Abraham (RWTH Aachen, DE) [dblp]
- George A. Constantinides (Imperial College London, GB) [dblp]
- Nasrine Damouche (University of Perpignan, FR) [dblp]
- Eva Darulova (MPI-SWS - Saarbrücken, DE) [dblp]
- James W. Demmel (University of California - Berkeley, US) [dblp]
- Anthony Di Franco (University of California - Davis, US) [dblp]
- Alastair F. Donaldson (Imperial College London, GB) [dblp]
- Theo Drane (Cadence Design Systems - Bracknell, GB) [dblp]
- Sam Elliott (Imagination Technologies - Kings Langley, GB) [dblp]
- Ganesh L. Gopalakrishnan (University of Utah - Salt Lake City, US) [dblp]
- Hui Guo (University of California - Davis, US) [dblp]
- Jeffrey K. Hollingsworth (University of Maryland - College Park, US) [dblp]
- Miriam Leeser (Northeastern University - Boston, US) [dblp]
- Daniel Liew (Imperial College London, GB) [dblp]
- Piotr Luszczek (University of Tennessee - Knoxville, US) [dblp]
- Victor Magron (VERIMAG - Grenoble, FR) [dblp]
- Matthieu Martel (University of Perpignan, FR) [dblp]
- Guillaume Melquiond (INRIA - Gif-sur-Yvette, FR) [dblp]
- David Monniaux (Université Grenoble Alpes - Saint-Martin-d'Hères, FR) [dblp]
- Magnus Myreen (Chalmers University of Technology - Göteborg, SE) [dblp]
- Santosh Nagarakatte (Rutgers University - Piscataway, US) [dblp]
- Pavel Panchekha (University of Washington - Seattle, US) [dblp]
- Sylvie Putot (Ecole Polytechnique - Palaiseau, FR) [dblp]
- Zvonimir Rakamaric (University of Utah - Salt Lake City, US) [dblp]
- Nathalie Revol (ENS - Lyon, FR) [dblp]
- Cindy Rubio-Gonzalez (University of California - Davis, US) [dblp]
- Daniel Schemmel (RWTH Aachen, DE) [dblp]
- Oscar Soria Dustmann (RWTH Aachen, DE) [dblp]
- Zachary Tatlock (University of Washington - Seattle, US) [dblp]
- Michela Taufer (University of Delaware - Newark, US) [dblp]
- Laura Titolo (National Institute of Aerospace - Hampton, US) [dblp]
- Thomas Wahl (Northeastern University - Boston, US) [dblp]
Classification
- programming languages / compiler
- semantics / formal methods
- verification / logic
Keywords
- Floating-point arithmetic
- Program optimization
- Rigorous compilation
- Precision allocation
- Energy-efficient computing