Dagstuhl Seminar 00381
Code Optimisation: Trends, Challenges and Perspectives
(Sep 17 – Sep 22, 2000)
Organizers
- C. Dulong (Intel, Santa Clara)
- J. Knoop (Dortmund)
- J. Pierce (Intel, Santa Clara)
- R. Gupta (Tucson)
- R. Kennedy (Tensilica, Santa Clara)
The last decades have witnessed continuous, rapid, and far-reaching progress in code optimisation. Currently, optimisation faces new challenges caused by the growing importance of advanced programming paradigms such as object-oriented, (data-)parallel, and distributed programming, by the spread of innovative processor architectures, and by the explosive proliferation of new application scenarios such as Web computing in the wake of the now-ubiquitous Internet.
New paradigms, new architectures, and new application scenarios not only demand new compilation and optimisation techniques, but also offer new potential for optimisation at both the machine-dependent and the machine-independent level.
In the light of this situation, the aim of the seminar is to bring together researchers and practitioners from industry and academia working on any phase of optimising compilation to exchange views, share experiences, and identify potentials as well as current and future challenges, and thus to bridge gaps and stimulate synergies between theory and practice and between diverse areas of optimisation, such as machine-dependent and machine-independent techniques.
Central issues to be discussed in the seminar are:
- Paradigm and software/hardware boundaries:
- Do we require new techniques to reasonably accommodate the specifics of new paradigms, architectures, or Web-driven application scenarios? Will unifying approaches that transcend paradigms be superior or even indispensable because of the economic demands for reusability, portability, and automatic generation? Similarly, the boundaries between hardware and software optimisations are shifting and being redefined, e.g., by IA-64. Does the boundary lie in the right place? What architecture hooks are missing for the compiler to really be as good as the hardware? Is that even possible?
- Optimisation of running time vs. memory use:
- Will there be a renaissance of storage-saving optimisations, and a shift of emphasis away from running time, due to the growing importance of embedded systems and the distribution of executables across the Internet?
- Static vs. dynamic and profile-guided optimisation:
- Very wide architectures are very sensitive to profile-guided optimisations. The profile data set, however, is usually not known at compile time. What are practical ways of gathering profile information without slowing down applications, and practical ways of using this information for dynamic optimisation? Concerning Internet-based applications, must approaches for just-in-time compilation be complemented by approaches for just-in-time optimisation? What are the key issues here? (A minimal sketch of profile instrumentation follows this list.)
- Formal methods:
- What is the role of formal methods in code optimisation with regard to the requirements of reliability, validation, or even verifiability of the correctness and (formal) optimality of an optimisation? What should their role be? How can the benefits offered by formal methods best be combined with those of empirical evaluation?
- Mastering complexity:
- The increased complexity of compiler optimisation can lead to validation nightmares and can increase compiler team size to a counterproductive level. This has proved to be a key problem in practice. How can it be mastered? In particular, how do we avoid the problems faced by growing software and hardware teams? Can formal methods improve this situation? What could their impact be?
- Experimental evaluations:
- Do we need a common, publicly available compiler testbed for experimental evaluations and comparisons of competing approaches? What would be the key requirements? Is it indispensable for reasonably pushing synergies between theory and practice?
- Synergies:
- What can people from different communities working on code optimisation, e.g. at the machine-dependent and the machine-independent level, learn from each other, and how?
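To make the notion of "gathering profile information" concrete, the following is a minimal, hypothetical sketch in C of the kind of basic-block counter instrumentation a profile-guided compiler might insert into generated code. The counter array, block numbering, and the profile.out file name are illustrative assumptions rather than any particular compiler's scheme; the overhead of such counters is precisely the cost the question above asks how to keep small.

```c
/* Hypothetical sketch: source-level view of basic-block counter
 * instrumentation for profile-guided optimisation. Block IDs, the
 * counter array, and "profile.out" are illustrative assumptions. */
#include <stdio.h>
#include <stdlib.h>

#define NUM_BLOCKS 3                  /* one counter per basic block */
static unsigned long bb_count[NUM_BLOCKS];

static void dump_profile(void) {
    /* At program exit, write the counters so that a later compilation
     * can read them back and bias its optimisation decisions. */
    FILE *f = fopen("profile.out", "w");
    if (!f) return;
    for (int i = 0; i < NUM_BLOCKS; i++)
        fprintf(f, "block %d: %lu\n", i, bb_count[i]);
    fclose(f);
}

int main(void) {
    atexit(dump_profile);
    bb_count[0]++;                    /* entry block */
    for (int i = 0; i < 1000; i++) {
        bb_count[1]++;                /* loop body: expected to be hot */
        if (i % 100 == 0)
            bb_count[2]++;            /* rarely taken branch: cold */
    }
    return 0;
}
```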
At the threshold of a new millennium, and in the face of the rapid change of paradigms on both the software and the hardware side, it seems worthwhile to take stock of the state of the art, to reflect on recent trends, and to identify current and future challenges and perspectives in code optimisation.
We believe that a Dagstuhl Seminar will provide an ideal setting for this endeavour: a stimulating venue for people from all communities working seriously on these issues, but often with (too) little contact with one another, to come together, exchange views and ideas, and share their different experiences in order to foster synergies and further progress in the field.
- C. Dulong (Intel, Santa Clara)
- J. Knoop (Dortmund)
- J. Pierce (Intel, Santa Clara)
- R. Gupta (Tucson)
- R. Kennedy (Tensilica, Santa Clara)