Dagstuhl Seminar 17481
Reliable Computation and Complexity on the Reals
(Nov 26 – Dec 01, 2017)
Organizers
- Norbert T. Müller (Universität Trier, DE)
- Siegfried M. Rump (TU Hamburg-Harburg, DE)
- Klaus Weihrauch (FernUniversität in Hagen, DE)
- Martin Ziegler (KAIST - Daejeon, KR)
Contact
- Andreas Dolzmann (for scientific matters)
- Simone Schilke (for administrative matters)
Impacts
- Nemlineáris szimbolikus transzformációk optimalizálási feladatokra [Nonlinear symbolic transformations for optimization problems] : article - Csendes, Tibor; Antal, Elvira Dobjanne - (Szigma ; 48, 2017, pp. 33-46).
- On the Existence of the Double Scroll Attractor for the Chua's Circuit with a Smooth Nonlinearity : article in 2018 IEEE International Symposium on Circuits and Systems (ISCAS) - Galias, Zbigniew; Tucker, Warwick - Los Alamitos : IEEE, 2018 - 5 pp.
- Representation Theory of Compact Metric Spaces and Computational Complexity of Continuous Data : article - Kawamura, Akitoshi; Lim, Donghyun; Selivanova, Svetlana; Ziegler, Martin - Cornell University : arXiv.org, 2019 - 39 pp.
- Rigorous integration of smooth vector fields around spiral saddles with an application to the cubic Chua's attractor : article - Galias, Zbigniew; Tucker, Warwick - Amsterdam : Elsevier, 2019 - pp. 2408-2434 - (Journal of Differential Equations ; 266, 2019, 5).
- Rigorous numerical computations for 1D advection equations with variable coefficients : article - Takayasu, Akitoshi; Yoon, Suro; Endo, Yasunori - Cornell University : arXiv.org, 2019 - 22 pp.
Motivation
The intention of this Dagstuhl Seminar is to bring together two groups of researchers working in related areas yet with quite orthogonal research directions: Reliable Computation and Complexity on the Reals.
Both communities are invited to exchange views, challenges, and the state of the art. The goal of the seminar is to unleash their full joint creativity through synergy: to foster interaction, to build bridges, to cross-promote, and to tutor the next generation of scientists at the interface between both fields. In addition to building on and contributing to computer science, the two fields share a solid background in mathematics, in particular advanced real analysis. Moreover, they have in common an emphasis on rigorous calculations with guaranteed error bounds for approximations to (not necessarily algebraic) numbers and functions.
Today numerical mathematics offers a large variety of stable and reliable algorithms. Erroneous results are rare: rare enough that one does not have to worry about them constantly, yet not rare enough to ignore them. The first research direction, “Reliable numerical computation”, aims to produce rigorously correct answers to numerical problems. This includes proving that a problem is solvable and computing mathematically correct error bounds for its solution. Only floating-point arithmetic is used, to take advantage of its tremendous speed; this naturally limits the set of problems that can be solved. However, in contrast to purely numerical methods, no false answers are possible: either a true error bound is computed or a corresponding error message is given.
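To make the mechanism concrete, here is a minimal sketch (not taken from the seminar material) of rigorous interval arithmetic in Python: each endpoint computed in round-to-nearest is widened outward by one unit in the last place via math.nextafter (Python 3.9+), so the returned interval is guaranteed to contain the exact result. Production libraries instead switch the hardware rounding direction, which is tighter and faster; the helper names below are purely illustrative.

```python
import math

def _down(x: float) -> float:
    """Next float below x: a safe lower bound after rounding to nearest."""
    return math.nextafter(x, -math.inf)

def _up(x: float) -> float:
    """Next float above x: a safe upper bound after rounding to nearest."""
    return math.nextafter(x, math.inf)

def iadd(a, b):
    """Enclosure of [a1,a2] + [b1,b2]: round, then widen outward by one ulp."""
    (a1, a2), (b1, b2) = a, b
    return (_down(a1 + b1), _up(a2 + b2))

def imul(a, b):
    """Enclosure of [a1,a2] * [b1,b2]: min/max over endpoint products, widened."""
    (a1, a2), (b1, b2) = a, b
    p = [a1 * b1, a1 * b2, a2 * b1, a2 * b2]
    return (_down(min(p)), _up(max(p)))

# The stored floats for 0.1 and 0.2 only approximate those decimals, but the
# enclosure below provably contains the exact sum of the two stored floats.
print(iadd((0.1, 0.1), (0.2, 0.2)))    # roughly (0.3, 0.30000000000000007)
print(imul((1.0, 2.0), (-3.0, 0.5)))   # contains the exact range [-6, 1]
```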
The second research direction, “Complexity on the Reals”, is a branch of computability theory studying those functions on the real numbers and related structures that can be effectively computed by machines such as digital computers. This is where real analysis meets (discrete) complexity theory with its notions of runtime and memory/space. It is a branch of TTE (“Type 2 Theory of Effectivity”), which has turned out to be particularly useful for investigating computability on uncountable sets. The success of TTE is based on the idea of joining classical computability theory from theoretical computer science with the mathematical field of topology. For practical purposes, and in the spirit of algorithm engineering, the asymptotic results from complexity theory are further refined by considering the efficiency of actual implementations in “exact real arithmetic”.
Most of the participants are expected to give presentations on their current research; however, the schedule will ensure ample time for discussions and ad hoc sessions. We also plan to hold brainstorming sessions, to discuss unfinished ideas, to present very recent results, and to reflect on the current state of the field in general.
As the two groups of researchers currently work quite independently, it is hard to predict concrete results. However, we strongly anticipate that the joint meeting will be as fruitful as the joining of computability theory and topology was in the creation of TTE.
Summary
The seminar was a meeting between two groups of researchers working in the related areas of reliable computing and of computational complexity on real numbers. While the first area originates in numerical analysis, the second goes back to the roots of computer science and computability.
Reliable computation aims to produce correct answers to numerical problems with mathematical rigor. This includes proving that a problem is solvable and computing mathematically correct error bounds for its solution. Reliable numerical computations use only floating-point arithmetic, to take advantage of its tremendous speed. Naturally this limits the problems that can be solved, in particular with respect to the condition number. However, in contrast to purely numerical methods, no false answers are possible: either a true error bound is computed or a corresponding error message is given. Reliable numerical computation has a history: in the early days, interval arithmetic was often used in a rather naive way, so that the computed results, while correct, were often wide, or no bounds could be computed at all. Meanwhile it is well understood how to derive effective methods for reliable numerical computations, avoiding wide bounds and pushing the set of solved problems to the limit of that of purely numerical algorithms. A number of interesting and hard mathematical problems have been solved using reliable numerical computations, including the famous Kepler conjecture and the existence of mutually distinct solutions to certain partial differential equations. Needless to say, solving such a mathematical problem requires rigorous solutions of all its subproblems.
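The following toy example (exact rational endpoints via Python's fractions module, illustrative helper names) shows the kind of overestimation that naive interval evaluation produces: evaluated term by term over [0, 1], the expression x*(1-x) is bounded only by [0, 1], although its true range is [0, 1/4]; rewriting it in centered form with a dedicated square operation recovers the exact range.

```python
from fractions import Fraction as F

def isub(a, b):   # [a1,a2] - [b1,b2]
    return (a[0] - b[1], a[1] - b[0])

def imul(a, b):   # [a1,a2] * [b1,b2]
    p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
    return (min(p), max(p))

def isqr(a):      # x^2 as a single operation: the result is never negative
    lo, hi = a
    if lo <= 0 <= hi:
        return (F(0), max(lo * lo, hi * hi))
    return (min(lo * lo, hi * hi), max(lo * lo, hi * hi))

x = (F(0), F(1))                        # x ranges over [0, 1]
one, half, quarter = (F(1), F(1)), (F(1, 2), F(1, 2)), (F(1, 4), F(1, 4))

# Naive evaluation of x*(1-x): the two occurrences of x are treated as
# independent variables, so the bound is [0, 1]; the true range is [0, 1/4].
print(imul(x, isub(one, x)))                  # (Fraction(0, 1), Fraction(1, 1))

# The centered form 1/4 - (x - 1/2)^2, with a dedicated square, is exact here.
print(isub(quarter, isqr(isub(x, half))))     # (Fraction(0, 1), Fraction(1, 4))
```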
Computable analysis is a branch of computability theory studying those functions on the real numbers and related structures which can be computed by machines such as digital computers. The increasing demand for reliable software in scientific computation and engineering requires a sound and broad foundation not only of the analytical/numerical but also of the computational aspects of real number computation. The branch of computable analysis based on the definition by Grzegorczyk and Lacombe of computable real functions (TTE, "Type 2 Theory of Effectivity") has turned out to be particularly useful for investigating computability on uncountable sets. As a central concept, computability appears as a specialization of continuity. Meanwhile the computability of numerous analytic problems has been investigated (from basic analysis, functional analysis, ordinary and partial differential equations, analytic functions, measure theory, dynamical systems, etc.). All these examples demonstrate the usefulness of the concept.
Once a problem has been shown computable, a natural next question asks for the computational efficiency of such a solution. This is where real analysis meets (discrete) complexity theory with its notions of runtime and memory/space: asymptotically, with respect to n → ∞, for approximating the output up to absolute error 2^{-n}. The famous Bailey-Borwein-Plouffe method, for instance, permits computing billions of digits of certain transcendental numbers, such as π, within minutes, whereas Bloch's constant, although proven computable, is still not known up to error 2^{-5}. In fact the distinction between polynomial and exponential time, gauged in the discrete realm for instance by the complexity classes P, NP, #P, and PSPACE, re-emerges in the real case: the bit-cost of computing the maximum of an arbitrary fixed smooth (i.e. infinitely often differentiable) polynomial-time computable f : [0,1] → [0,1] has been shown to correspond to the P vs. NP question; that of Riemann integration to #P; and that of solving an ordinary differential equation to PSPACE. On analytic functions, on the other hand, these operations map polynomial-time computable instances back to polynomial-time computable results.
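As a concrete, if simplistic, instance of this error convention, the sketch below (not from the seminar material) computes Euler's number e to absolute error 2^{-n} from the series Σ 1/k!, using an explicit tail bound and exact rational arithmetic; this is the shape of "machine" whose cost real complexity theory measures as a function of n.

```python
from fractions import Fraction as F

def e_approx(n: int) -> F:
    """Return a dyadic rational within 2^-n of Euler's number e.

    Convention from computable analysis: the machine receives the accuracy
    parameter n and must output an approximation with absolute error <= 2^-n.
    """
    budget = F(1, 2 ** (n + 1))       # error allowed for truncating the series
    s, term, k = F(1), F(1), 1        # s = partial sum 1/0! + 1/1! + ...
    # Tail bound: sum_{j >= k} 1/j! < 2/(k-1)! = 2 * term, so we may stop
    # as soon as 2 * term <= budget.
    while 2 * term > budget:
        term /= k
        s += term
        k += 1
    # Round the exact partial sum to a dyadic of denominator 2^(n+1); this adds
    # at most 2^-(n+2), so the total error stays below 2^-n.
    denom = 2 ** (n + 1)
    return F(round(s * denom), denom)

print(float(e_approx(30)))            # 2.718281828... (error below 2^-30)
```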
For practical purposes and in the spirit of "algorithm engineering", the asymptotic results from complexity theory have to be refined by considering the efficiency of actual implementations. Corresponding software libraries are usually called "exact real arithmetic" (ERA) and implement real numbers in the sense of TTE. ERA implementations exist in many languages, such as C, C++, Java, Haskell, or OCaml. Internally, ERA has to perform operations on infinite data like {0,1}^ω. The user interface, however, hides these details and offers operations and functions on "exact" real numbers. In consequence, users do not need to care about aspects like rounding or truncation errors or the specification of precisions; instead, they can concentrate on the mathematical part of the problem under consideration. As computable real functions have to be continuous, it is impossible to implement some widely used real operations (such as testing for equality). In consequence, ERA cannot simply copy the double-precision interface one-to-one, but needs to go its own way. Additionally, for reasons of efficiency, the representations used in TTE have to be carefully revised. The resulting speed is comparable to using multiple-precision floating-point numbers, but without any need for manual precision control.
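A much-simplified sketch of this interface idea, with hypothetical helper names and no connection to any particular ERA library: a real number is modelled as a function mapping a requested accuracy n to a rational approximation with error at most 2^{-n}, and operations such as addition internally ask their arguments for one extra bit of accuracy, so the user never specifies a working precision.

```python
from fractions import Fraction as F

# A "real number" is a function that, given n, returns a rational approximation
# with absolute error at most 2^-n (a crude stand-in for a TTE name).
# Operations build new such functions from old ones.

def const(q):                  # an exact rational, available at any accuracy
    return lambda n: F(q)

def add(x, y):                 # ask each operand for one extra bit of accuracy
    return lambda n: x(n + 1) + y(n + 1)

def sqrt2():                   # the square root of 2, by interval bisection
    def approx(n):
        lo, hi = F(1), F(2)
        while hi - lo > F(1, 2 ** n):
            mid = (lo + hi) / 2
            if mid * mid <= 2:
                lo = mid
            else:
                hi = mid
        return lo
    return approx

# The user combines real numbers without ever choosing a working precision:
x = add(sqrt2(), const(F(1, 3)))
print(float(x(50)))            # 1.7475468957... accurate to about 2^-50
```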
Participants
- Andrej Bauer (University of Ljubljana, SI) [dblp]
- Henning Behnke (TU Clausthal, DE) [dblp]
- Jens Blanck (Swansea University, GB) [dblp]
- Franz Brauße (Universität Trier, DE) [dblp]
- Florian Bünger (TU Hamburg-Harburg, DE) [dblp]
- Pieter Collins (Maastricht University, NL) [dblp]
- George F. Corliss (Marquette University - Milwaukee, US) [dblp]
- Tibor Csendes (University of Szeged, HU) [dblp]
- Eva Darulova (MPI-SWS - Saarbrücken, DE) [dblp]
- Hubert Glesener (Universität Trier, DE)
- Daniel Graca (University of Algarve, PT) [dblp]
- Stef Graillat (UPMC - Paris, FR) [dblp]
- Peter Hertling (Universität der Bundeswehr - München, DE) [dblp]
- Fabian Immler (TU München, DE) [dblp]
- Luc Jaulin (ENSTA Bretagne - Brest, FR) [dblp]
- Akitoshi Kawamura (Kyushu University, JP) [dblp]
- Sunyoung Kim (Ewha Womans University, KR) [dblp]
- Michal Konecny (Aston University - Birmingham, GB) [dblp]
- Margarita Korovina (Russian Academy of Sc. - Novosibirsk, RU) [dblp]
- Vladik Kreinovich (University of Texas - El Paso, US) [dblp]
- Ivo List (University of Ljubljana, SI)
- Wolfram Luther (Universität Duisburg-Essen, DE) [dblp]
- Fritz Mayer-Lindenberg (TU Hamburg-Harburg, DE) [dblp]
- Atsushi Minamihata (AIST - Tokyo, JP) [dblp]
- Norbert T. Müller (Universität Trier, DE) [dblp]
- Mitsuhiro T. Nakao (Waseda University - Tokyo, JP) [dblp]
- Eike Neumann (Aston University - Birmingham, GB) [dblp]
- Katsuhisa Ozaki (Shibaura Institute of Technology - Saitama, JP) [dblp]
- Sewon Park (KAIST - Daejeon, KR) [dblp]
- Michael Plum (KIT - Karlsruher Institut für Technologie, DE) [dblp]
- Amaury Pouly (MPI-SWS - Saarbrücken, DE) [dblp]
- Robert Rettinger (Fachhochschule Dortmund, DE) [dblp]
- Fabrice Rouillier (INRIA - Paris, FR) [dblp]
- Siegfried M. Rump (TU Hamburg-Harburg, DE) [dblp]
- Michael Sagraloff (MPI für Informatik - Saarbrücken, DE) [dblp]
- Matthias Schröder (Universität der Bundeswehr - München, DE) [dblp]
- Svetlana Selivanova (Sobolev Institute of Mathematics - Novosibirsk, RU) [dblp]
- Florian Steinberg (TU Darmstadt, DE) [dblp]
- Akitoshi Takayasu (University of Tsukuba, JP) [dblp]
- Holger Thies (University of Tokyo, JP) [dblp]
- Warwick Tucker (Uppsala University, SE) [dblp]
- Joris van der Hoeven (Ecole Polytechnique - Palaiseau, FR) [dblp]
- Klaus Weihrauch (FernUniversität in Hagen, DE) [dblp]
- Nobito Yamamoto (The University of Electro-Communications - Tokyo, JP) [dblp]
- Yuuka Yanagisawa (Waseda University - Tokyo, JP)
- Chee K. Yap (New York University, US) [dblp]
- Martin Ziegler (KAIST - Daejeon, KR) [dblp]
- Paul Zimmermann (INRIA Nancy - Grand Est, FR) [dblp]
Classification
- data structures / algorithms / complexity
- semantics / formal methods
- verification / logic
Keywords
- computable analysis
- reliable numerical computation
- complexity of real computation
- floating-point arithmetic
- exact real arithmetic