Dagstuhl Perspectives Workshop 15452
Artifact Evaluation for Publications
(November 1 – 4, 2015)
Organizers
- Bruce R. Childers (University of Pittsburgh, US)
- Grigori Fursin (cTuning - Cachan, FR)
- Shriram Krishnamurthi (Brown University - Providence, US)
- Andreas Zeller (Universität des Saarlandes, DE)
Contact
- Susanne Bach-Bernhard (for administrative matters)
Motivation
The computer systems research (CSR) community has developed numerous artifacts that encompass a rich and diverse collection of compilers, simulators, analyzers, benchmarks, data sets, and other software and data. These artifacts are used to implement research innovations, evaluate trade-offs, and analyze implications. Unfortunately, the evaluation methods used for computing systems innovation can be at odds with sound science and engineering practice. In particular, ever-increasing competitiveness and the pressure to publish more results pose an impediment to accountability, which is key to the scientific and engineering process. Experimental results are typically not distributed with enough information for repeatability or reproducibility, making it hard to compare against an innovation or to build on it. Efforts in programming languages/compilers and software engineering, computer architecture, and high-performance computing are underway to address this challenge.
This Dagstuhl Perspectives Workshop brings together leaders of these efforts and senior stakeholders of CSR sub-communities to determine synergies and to identify promising directions and mechanisms for moving the broader community toward accountability. The workshop assesses current efforts, shares what does and doesn't work, identifies additional processes, incentives, and mechanisms, and determines how to coordinate and sustain these efforts. The workshop's outcome is a roadmap of actionable strategies and steps for improving accountability, leveraging the investments of multiple groups, educating the community about accountability, and sharing artifacts and experiments.
Summary
Computer systems researchers have developed numerous artifacts that encompass a broad collection of software tools, benchmarks, and data sets. These artifacts are used to prototype innovations, evaluate trade-offs, and analyze implications. Unfortunately, the methods used to evaluate computing systems innovations are often at odds with sound science and engineering practice. The ever-increasing pressure to publish more and more results poses an impediment to accountability, which is a key component of the scientific and engineering process. Experimental results are not usually disseminated with sufficient metadata (e.g., software extensions, data sets, benchmarks, test cases, scripts, and parameters) to achieve repeatability or reproducibility. Without this information, issues surrounding trust, fairness, and building on or comparing with previous ideas become problematic. Efforts in various computer systems research sub-communities, including programming languages/compilers, computer architecture, and high-performance computing, are underway to address this challenge.
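Capturing and shipping this metadata alongside the reported numbers is what makes repeatability possible in practice. As a purely illustrative sketch (the file names, fields, and parameter values below are assumptions for the example, not anything prescribed by the workshop), a small Python script could bundle the code revision, experiment parameters, input checksums, and platform details with a measured result:

```python
# Hypothetical sketch: record the metadata a reader would need to rerun an
# experiment next to the result itself. All paths, fields, and values are
# illustrative assumptions, not a standard endorsed by the workshop.
import hashlib
import json
import platform
import subprocess
from datetime import datetime, timezone


def sha256(path: str) -> str:
    """Checksum an input file so the exact data set version is identifiable."""
    digest = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(8192), b""):
            digest.update(chunk)
    return digest.hexdigest()


def experiment_record(result: dict, params: dict, inputs: list) -> dict:
    """Attach repeatability metadata to a measured result."""
    return {
        "result": result,
        "parameters": params,
        "inputs": {path: sha256(path) for path in inputs},
        # Assumes the experiment lives in a git checkout; otherwise leave blank.
        "code_revision": subprocess.run(
            ["git", "rev-parse", "HEAD"], capture_output=True, text=True
        ).stdout.strip(),
        "platform": platform.platform(),
        "python": platform.python_version(),
        "timestamp": datetime.now(timezone.utc).isoformat(),
    }


if __name__ == "__main__":
    record = experiment_record(
        result={"speedup": 1.37},                          # the headline number
        params={"optimization_level": "-O3", "runs": 30},  # how it was obtained
        inputs=["benchmarks/workload.csv"],                # hypothetical data set
    )
    with open("experiment-metadata.json", "w") as f:
        json.dump(record, f, indent=2)
```

A reviewer or later researcher could then compare such a record against their own environment before attempting to reproduce the reported numbers; the point is not this particular format, but that the information travels with the result.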
This Dagstuhl Perspectives Workshop (PW) brought together stakeholders of associated CSR sub-communities to determine synergies and to identify the most promising directions and mechanisms to push the broader community toward accountability. The PW assessed current efforts, shared what does and doesn't work, identified additional processes, and determined possible incentives and mechanisms. The outcomes from the workshop, including recommendations to catalyze the community, are separately documented in an associated Dagstuhl Manifesto.
Participants
- Bruce R. Childers (University of Pittsburgh, US) [dblp]
- Neil Chue Hong (Software Sustainability Institute - Edinburgh, GB) [dblp]
- Tom Crick (Cardiff Metropolitan University, GB) [dblp]
- Jack W. Davidson (University of Virginia - Charlottesville, US) [dblp]
- Camil Demetrescu (Sapienza University of Rome, IT) [dblp]
- Roberto Di Cosmo (University Paris-Diderot, FR) [dblp]
- Jens Dittrich (Universität des Saarlandes, DE) [dblp]
- Dror Feitelson (The Hebrew University of Jerusalem, IL) [dblp]
- Sebastian Fischmeister (University of Waterloo, CA) [dblp]
- Grigori Fursin (cTuning - Cachan, FR) [dblp]
- Ashish Gehani (SRI - Menlo Park, US) [dblp]
- Matthias Hauswirth (University of Lugano, CH) [dblp]
- Marc Herbstritt (Schloss Dagstuhl, DE) [dblp]
- David R. Kaeli (Northeastern University - Boston, US) [dblp]
- Shriram Krishnamurthi (Brown University - Providence, US) [dblp]
- Anton Lokhmotov (Dividiti Ltd. - Cambridge, GB) [dblp]
- Martin Potthast (Bauhaus-Universität Weimar, DE) [dblp]
- Lutz Prechelt (FU Berlin, DE) [dblp]
- Petr Tuma (Charles University - Prague, CZ) [dblp]
- Michael Wagner (Schloss Dagstuhl, DE) [dblp]
- Andreas Zeller (Universität des Saarlandes, DE) [dblp]
Classification
- hardware
- optimization / scheduling
- software engineering
Keywords
- Empirical Evaluation of Software Tools
- Documentation of Research Processes
- Artifact Evaluation
- Experimental Reproducibility