Dagstuhl-Seminar 03381
New Optimization Algorithms in Physics
(14 Sep – 19 Sep 2003)
Organizers
- Alexander K. Hartmann (Universität Göttingen, DE)
- Kurt Mehlhorn (MPI für Informatik - Saarbrücken, DE)
- Heiko Rieger (Universität des Saarlandes, DE)
Nearly three years earlier, in December 2001, the Dagstuhl Seminar "Algorithmic Techniques in Physics (II)" took place. Researchers from computer science, mathematics and physics came together to discuss algorithmic problems occurring in physics and physical concepts that might be useful in computer science. Bringing together people from three different areas was an experiment that, as all participants agreed in the end, turned out to be a success and, more importantly, should be repeated in the future.
In the field of optimization, the interactions between computer scientists and physicists are growing strongly. This is due, on the one hand, to an increasing number of optimization methods being applied to problems from physics and, on the other hand, to concepts and methods from statistical physics which have recently been applied to study optimization problems occurring in theoretical computer science. Still, many algorithms or problems are known in only one field. Hence, computer scientists as well as physicists could profit greatly from participating in this workshop, which aims to spread knowledge across the fields and to encourage new projects and cooperations.
In recent years, several very efficient exact optimization algorithms have been developed in the computer science community. Examples are maximum-flow algorithms, minimum-cost-flow techniques and matching methods, which are all graph-theoretical approaches, as well as sophisticated branch-and-cut methods originating in the field of linear optimization. These algorithms have now been applied to problems from physics such as random magnetic materials (random-field systems, spin glasses), surface physics (solid-on-solid models) and many other disordered systems. The system sizes which can be treated are now much larger than ten years ago, allowing one to obtain more reliable and statistically more significant data.
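To illustrate the kind of graph-theoretical method meant here, the following is a minimal sketch (not code from the seminar) of the Edmonds-Karp maximum-flow algorithm; ground states of random-field Ising models can be computed exactly via the equivalence of maximum flow and minimum cut. The graph representation and example network are illustrative choices.

```python
# Edmonds-Karp maximum flow: repeatedly augment along shortest paths
# found by breadth-first search in the residual graph.
from collections import deque

def max_flow(capacity, source, sink):
    """capacity: dict of dicts, capacity[u][v] = capacity of edge u -> v."""
    # residual capacities, initialized from the input graph,
    # with zero-capacity reverse edges added
    residual = {u: dict(nbrs) for u, nbrs in capacity.items()}
    for u, nbrs in capacity.items():
        for v in nbrs:
            residual.setdefault(v, {}).setdefault(u, 0)
    flow = 0
    while True:
        # BFS for a shortest augmenting path in the residual graph
        parent = {source: None}
        queue = deque([source])
        while queue and sink not in parent:
            u = queue.popleft()
            for v, cap in residual[u].items():
                if cap > 0 and v not in parent:
                    parent[v] = u
                    queue.append(v)
        if sink not in parent:
            return flow  # no augmenting path left: flow is maximal
        # find the bottleneck capacity along the path
        bottleneck, v = float("inf"), sink
        while parent[v] is not None:
            bottleneck = min(bottleneck, residual[parent[v]][v])
            v = parent[v]
        # push the bottleneck flow along the path
        v = sink
        while parent[v] is not None:
            u = parent[v]
            residual[u][v] -= bottleneck
            residual[v][u] += bottleneck
            v = u
        flow += bottleneck

# Example: a small network whose maximum flow is 3
g = {"s": {"a": 2, "b": 2}, "a": {"t": 2}, "b": {"t": 1}, "t": {}}
print(max_flow(g, "s", "t"))  # 3
```

In the physics application, the minimum cut separating source from sink corresponds to the domain wall of the ground-state spin configuration.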
Several heuristic approaches have also found applications in physics. One example is genetic algorithms, which mimic the optimization of species in an evolutionary process to find very good approximations of the global minima of complicated functions. Genetic algorithms have recently been applied to study systems ranging from galaxies to quantum systems. Simple but nevertheless very efficient variants of genetic algorithms have been developed recently and were presented in the seminar.
The range of problems treatable with exact and heuristic optimization algorithms, and the number of algorithms applicable to problems from physics, is much larger than has been realized so far. Hence, the physics community will profit greatly from learning more about recent algorithmic developments. On the other hand, computer scientists, who are looking for real-world applications of sophisticated algorithms, will benefit strongly from finding out about physical problems which can be solved using optimization methods.
Conversely, in the field of inventing new algorithms, computer scientists can profit from developments in the physics community. Several techniques which originated in physical problems or physical methods have recently been applied in different areas. The prototypical example is the simulated annealing method, which simulates the slow cooling of an experimental sample to find low-energy states. This technique has been applied to many problems from other fields, such as the traveling salesman problem or the optimization of production schedules. Recently, several enhancements of simulated annealing have been developed. Examples are the parallel tempering approach, where several systems are kept in parallel at different temperatures, and the multicanonical ensemble, where the temperature of the sample is allowed to fluctuate according to a certain problem-adjusted recipe. Other concepts from physics have also led to the development of new algorithms. One example is renormalization-group-based approaches, where the target function is optimized iteratively on different length scales. All these new methods will strongly enhance the efficiency of physics-based algorithms and greatly enlarge the range of applications.
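The basic simulated annealing recipe can be sketched as follows: a minimal, illustrative implementation for a tiny traveling-salesman instance (the cooling schedule, move set and problem size are arbitrary choices, not those of any work presented at the seminar).

```python
# Simulated annealing for a small traveling-salesman tour:
# accept improving moves always, worsening moves with Boltzmann probability.
import math
import random

random.seed(1)
cities = [(random.random(), random.random()) for _ in range(12)]

def tour_length(order):
    return sum(math.dist(cities[order[i]], cities[order[(i + 1) % len(order)]])
               for i in range(len(order)))

order = list(range(len(cities)))
temperature = 1.0
while temperature > 1e-3:
    # propose reversing a random segment (a standard 2-opt-style move)
    i, j = sorted(random.sample(range(len(order)), 2))
    candidate = order[:i] + order[i:j + 1][::-1] + order[j + 1:]
    delta = tour_length(candidate) - tour_length(order)
    # Metropolis criterion: exp(-delta/T) acceptance for worsening moves
    if delta < 0 or random.random() < math.exp(-delta / temperature):
        order = candidate
    temperature *= 0.999  # slow geometric cooling schedule

print(round(tour_length(order), 3))
```

Parallel tempering would run several such chains at different fixed temperatures and occasionally swap configurations between neighboring temperatures; the multicanonical ensemble instead lets the temperature itself fluctuate.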
A second emphasis of this workshop was the study of optimization problems from theoretical computer science using concepts and methods from statistical physics. Widely studied problems are the satisfiability problem (SAT), where one asks whether for a given Boolean formula there exists an assignment of the variables satisfying all constraints, and the vertex-cover problem (VC), where one seeks a set of marked vertices in a graph such that each edge is adjacent to at least one mark. Both SAT and VC exhibit, like many other problems, phase transitions in a suitably parametrized ensemble of random instances. Thus, many methods invented in statistical physics to study phase transitions can be applied to problems from theoretical computer science, leading to results which could not be found before using traditional methods from mathematics. For example, SAT and VC have been treated using the replica method, which was originally used to study the aforementioned spin-glass problems analytically. Since there are more than 50000 NP-complete problems, many of them unknown to physicists, much work remains to be done in this field.
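The vertex-cover definition above translates directly into a brute-force decision procedure. The sketch below is purely illustrative (exhaustive search, exponential in the number of vertices) and only demonstrates the problem statement, not any method from the seminar.

```python
# Brute-force vertex-cover decision: is there a set of at most k marked
# vertices such that every edge is adjacent to at least one mark?
from itertools import combinations

def has_vertex_cover(n_vertices, edges, k):
    for size in range(k + 1):
        for cover in combinations(range(n_vertices), size):
            marked = set(cover)
            # a valid cover touches every edge with at least one mark
            if all(u in marked or v in marked for u, v in edges):
                return True
    return False

# Example: a triangle needs two marks; one is not enough
triangle = [(0, 1), (1, 2), (0, 2)]
print(has_vertex_cover(3, triangle, 1))  # False
print(has_vertex_cover(3, triangle, 2))  # True
```

The statistical-physics analyses concern ensembles of large random graphs, where the question becomes at which ratio k/n a cover typically exists.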
Interestingly, these phase transitions very often coincide with peaks in the running time or with changes of the typical-case complexity from polynomial to exponential. Hence, from studying these problems, one also learns a lot about the typical time complexity of algorithms. Recently, using the physical approaches, the complexity of simple complete SAT and VC algorithms could be computed analytically for the first time. Significant progress in this area was reported in various presentations at this seminar.
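The easy-hard-easy pattern can be observed empirically with a very small experiment. The following sketch (illustrative only; the solver is a naive DPLL-style backtracker without unit propagation, and the sample sizes are tiny) counts recursive calls on random 3-SAT formulas at different clause-to-variable ratios alpha; the hardest instances typically cluster near the satisfiability threshold, around alpha ~ 4.27 for random 3-SAT.

```python
# Count recursive calls of a naive backtracking SAT solver on random
# 3-SAT formulas at different clause-to-variable ratios.
import random

random.seed(2)

def random_3sat(n_vars, n_clauses):
    # each clause: three distinct variables, each negated with probability 1/2
    return [tuple(random.choice([v, -v])
                  for v in random.sample(range(1, n_vars + 1), 3))
            for _ in range(n_clauses)]

def dpll(clauses, calls=None):
    """Return (satisfiable?, number of recursive calls so far)."""
    if calls is None:
        calls = [0]
    calls[0] += 1
    if not clauses:
        return True, calls[0]       # all clauses satisfied
    if any(len(c) == 0 for c in clauses):
        return False, calls[0]      # an empty clause: contradiction
    var = abs(clauses[0][0])        # branch on a variable of the first clause
    for value in (var, -var):
        # satisfy literal `value`: drop satisfied clauses, shrink the rest
        reduced = [tuple(l for l in c if l != -value)
                   for c in clauses if value not in c]
        sat, _ = dpll(reduced, calls)
        if sat:
            return True, calls[0]
    return False, calls[0]

n = 20
for alpha in (2.0, 4.3, 6.0):
    total = sum(dpll(random_3sat(n, int(alpha * n)))[1] for _ in range(20))
    print(alpha, total // 20)  # average call count per instance
```

For meaningful statistics one would of course use far larger n and many more samples; this only illustrates the type of measurement behind the typical-case complexity results.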
Finally, part of this workshop was dedicated to bioinformatics. In this field, researchers from biology, computer science and physics cooperate in a most fruitful way. Algorithms provided by computer science, together with analytical methods and concepts from physics, help to elucidate many problems from molecular biology. Examples are the study of protein structures and their dynamics, or the prediction of secondary structures. Recently, using a mapping onto a physical system and by applying optimization algorithms, the rare-event statistics of sequence alignment, a method used to compare DNA and protein sequences stored in huge databases, could be studied.
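The core of sequence alignment is a dynamic-programming score, as in the classic Needleman-Wunsch global alignment shown below. This is a generic textbook sketch; the scoring values (match +1, mismatch -1, gap -1) are illustrative choices and not those used in the rare-event studies mentioned above.

```python
# Needleman-Wunsch global alignment score by dynamic programming.
def alignment_score(a, b, match=1, mismatch=-1, gap=-1):
    # dp[i][j] = best score aligning a[:i] with b[:j]
    dp = [[0] * (len(b) + 1) for _ in range(len(a) + 1)]
    for i in range(1, len(a) + 1):
        dp[i][0] = i * gap          # align a[:i] against only gaps
    for j in range(1, len(b) + 1):
        dp[0][j] = j * gap          # align b[:j] against only gaps
    for i in range(1, len(a) + 1):
        for j in range(1, len(b) + 1):
            diag = match if a[i - 1] == b[j - 1] else mismatch
            dp[i][j] = max(dp[i - 1][j - 1] + diag,  # (mis)match
                           dp[i - 1][j] + gap,       # gap in b
                           dp[i][j - 1] + gap)       # gap in a
    return dp[len(a)][len(b)]

print(alignment_score("GATTACA", "GCATGCU"))  # 0 for these example strings
```

The physics connection is that the distribution of such scores over random sequences behaves like a disordered-system observable, which is what makes rare-event (large-deviation) techniques applicable.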
All the examples given above show that, by combining efforts from computer science and physics, substantial progress has been made in recent years, and more can be expected in the future. The participants as well as the organizers had the impression that this workshop contributed to this development and gave all participants many opportunities for cross-community work and interdisciplinary collaborations.
The scientific highlights of the seminar were represented by the following keynote speakers:
- Marc Mézard (LPTMS - Orsay): Statistical Physics of the Satisfiability Problem: Survey Propagation, where a new and very efficient algorithm for satisfiability problems was presented.
- Remi Monasson (CNRS, Paris): Towards an Analysis of Average Case Properties of Backtrack Algorithms for Random Decision Problems, where the performance of backtracking algorithms was analyzed with tools from statistical physics.
- Frauke Liers (Universität zu Köln): Exact Ground States of Ising Spin Glasses, where the recent remarkable progress in the exact computation of 3-dimensional spin glass ground states using branch-and-cut algorithms was reported.
- Martin Weigt (Universität Göttingen): Solving Satisfiability Problems by Fluctuations: An Approximate Description of Stochastic Local Search Algorithms, where the latter were analyzed with methods from statistical physics.
- Jean-Christian Angles d'Auriac (Grenoble): Minimization of Sub-Modular Function: Application to the Potts Model, where a polynomial algorithm for calculating the partition function of the infinite-state random-bond Potts model was presented.
- David Saad (Aston University, Birmingham): A statistical mechanics based analysis of coded CDMA with regular LDPC codes, where communication codes were analyzed, again with tools known from statistical physics.
Training:
Six young researchers participated in the seminar, and all of them gave oral contributions of 40 minutes in length. They interacted lively with the senior scientists; there was no generation barrier.
European added value:
Half of the 28 participants came from abroad, mainly from France, Italy, Switzerland and the US. About half of the participants are also integrated in European networks: the SPHINX programme (Statistical physics of glassy and non-equilibrium systems) of the European Science Foundation and the DYGLAGEMEM network (Dynamics and statics of glasses and spin glasses) of the European Community. So networking on a European level was guaranteed from the beginning.
Additional information:
A number of keynote speakers were asked one year before the event to deliver a 20-30 page self-contained paper on their planned contribution. These papers will be edited by two of the organizers (A. Hartmann and H. Rieger) as a book with the title "New Optimization Algorithms in Physics", to be published by Wiley.
Participants
- Rudolf Ahlswede (Universität Bielefeld, DE)
- Mikko Alava (Helsinki University of Technology, FI)
- Carlo Amoruso (Universität Göttingen, DE)
- Jean-Christian Angles d'Auriac (Very Low Temperatures Research Center, FR)
- Wolfgang Barthel (Universität Göttingen, DE)
- Michel Bauer (CEA - Gif sur Yvette, FR)
- Heiko Bauke (Universität Magdeburg, DE)
- Stefan Boettcher (Emory University - Atlanta, US)
- Bernd Burghardt (Universität Göttingen, DE)
- Phillip Duxbury (Michigan State University, US)
- Andreas Engel (Universität Magdeburg, DE)
- Karoly Férenc Pal (Institute of Nuclear Research - Debrecen, HU)
- Peter Grassberger (Jülich Supercomputing Centre, DE)
- Ulrich H.E. Hansmann (Michigan Technological University, US)
- Alexander K. Hartmann (Universität Göttingen, DE)
- Karl Heinz Hoffmann (TU Chemnitz, DE)
- Wolfhard Janke (Universität Leipzig, DE)
- Michael Jünger (Universität Köln, DE) [dblp]
- Helmut G. Katzgraber (ETH Zürich, CH)
- Mathias Körner (ETH Zürich, CH)
- Michele Leone (Villa Gualino - Torino, IT)
- Frauke Liers (Universität Köln, DE) [dblp]
- Martin Loebl (Charles University - Prague, CZ) [dblp]
- Enzo Marinari (Sapienza University of Rome, IT)
- Kurt Mehlhorn (MPI für Informatik - Saarbrücken, DE) [dblp]
- Jan Meinke (Syracuse University, US)
- Stephan Mertens (Universität Magdeburg, DE)
- Marc Mézard (University Paris Sud, FR)
- Alan Middleton (Syracuse University, US)
- Remi Monasson (CNRS - Paris, FR) [dblp]
- Cristian Moukarzel (CINVESTAV - Merida, MX)
- Matteo Palassini (University Paris Sud, FR)
- Heiko Rieger (Universität des Saarlandes, DE)
- David Saad (Aston University - Birmingham, GB)
- Uwe Schöning (Universität Ulm, DE) [dblp]
- Guilhem Semerjian (ENS - Paris, FR)
- Martin Weigt (Universität Göttingen, DE) [dblp]