Dagstuhl-Seminar 23061

Scheduling

(Feb 05 – Feb 10, 2023)


Permalink
Please use the following short URL to link to this page: https://www.dagstuhl.de/23061

Organizers
  • Nicole Megow (Universität Bremen, DE)
  • Benjamin J. Moseley (Carnegie Mellon University - Pittsburgh, US)
  • David Shmoys (Cornell University - Ithaca, US)
  • Ola Svensson (EPFL - Lausanne, CH)
  • Sergei Vassilvitskii (Google - New York, US)

Summary

This Dagstuhl Seminar was the seventh in a series of Dagstuhl "Scheduling" seminars (since 2008). Scheduling is a major research field that is studied from a practical and theoretical perspective in computer science, mathematical optimization, and operations research. Applications range from traditional production scheduling and project planning to the newly arising resource management tasks brought about by internet technology and shared resources. Despite remarkable progress in the algorithmic theory of fundamental scheduling problems, new questions are gaining prominence due to the rise of new applications and new methodologies.

At this meeting, we focused on emerging models for beyond-worst-case analysis, especially models that incorporate learning. Decades of research have produced algorithmic discoveries along with impossibility results for scheduling. The primary focus has been on achieving the best possible performance in worst-case scenarios, in terms of running time, memory usage, and proximity to the optimum. Unfortunately, it has often been observed that such algorithms do not necessarily perform best in practice. Closing this gap between theory and practice is often impossible with the standard approach, due to lower-bound barriers on worst-case instances.

To bridge this gap, there has been an effort to understand algorithms through a new lens, called beyond-worst-case algorithm analysis. Initial work in this area has produced exciting algorithmic insights and has given convincing explanations for why some algorithms work well in practice. Moreover, it has led to interesting new theoretical questions and directions.

Several of the emerging beyond-worst-case analysis models incorporate learning. Developing an understanding of these models holds the promise of new algorithmic insights into scheduling problems and improved performance. This seminar focused on the following three themes.

  • Scheduling with Error-Prone Learned Predictions. This theme focuses on settings where an algorithm is given a (e.g., machine-learned) prediction about the problem instance and its performance is parameterized by the quality of that prediction. Since machine learning is rarely perfect, the prediction is assumed to be error-prone, but it can still be leveraged to guide algorithmic decisions. Ideally, (1) accurate predictions yield better performance than the best worst-case bound; (2) the algorithm never performs asymptotically worse than the best worst-case algorithm, even if the prediction error is large; and (3) in between, performance degrades gracefully as the prediction error grows. For example, the competitive ratio, approximation ratio, or running time can be parameterized by the prediction error. (A minimal sketch after this list illustrates this setting together with the data-driven parameter tuning described in the next item.)
  • Data-Driven Algorithm Design. Practitioners often do not use algorithms with the best worst-case performance guarantees directly. Rather, they investigate several algorithms and optimize over them to find the one that works best for the given application. That is, input data is used to determine (empirically) the best algorithm for the application at hand. Motivated by this, recent research seeks to give a theoretical foundation for this kind of data-driven algorithm selection.
    In this setting, a family or class of algorithms is defined for a problem. This family is constructed by the algorithm designer, who has access to prior instances of the problem drawn from a fixed but unknown probability distribution over instances. The goal is to learn from these prior instances and choose the algorithm from the family with the best expected performance on future instances. This model is closely related to practice. Indeed, it is often the case that a scheduling system will have access to job instances from prior days when making scheduling decisions on the current day.
  • Stochastic and Bayesian Learning in Scheduling. Similar to the learning-augmented setting, we wish to understand how to best incorporate uncertain knowledge about the future into algorithm design. The main difference is that, while the learning-augmented framework makes no assumptions about the quality of the predictions, the stochastic setting assumes that the data is drawn from a probability distribution. The distribution can either be known upfront or be learned over time. Questions of interest include:
    • If we know the distribution of data, how can we best use it in the design of algorithms?
    • If the distribution of data is learned over time, how can we best make decisions while being adaptive to new knowledge of the data?
    The scheduling community has recently made striking contributions to both of these questions. When the data distribution (here, the distribution of processing times) is known up front, there has been major progress on algorithms for scheduling jobs on multiple machines; this is an interesting practical and theoretical question, as the processing time of one job may affect the completion of another. An interesting extension arises when the algorithm is allowed some adaptivity in a second stage: after computing a schedule based on the distributional knowledge as above, the real processing times are revealed and the algorithm can adapt by making small changes to the schedule. In some scheduling settings, it is known that near-optimal scheduling algorithms can be computed efficiently under the assumption that jobs arrive according to a well-understood probability distribution, such as a Poisson process, whereas in the worst case there are often strong lower bounds showing that every online algorithm must be far from optimal. The goal is to achieve better performance under the assumption that jobs arrive according to a known probability distribution. (A short worked example for the known-distribution case follows the sketch below.)
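
To make the first two themes concrete, the following Python sketch is purely illustrative (it does not appear in the seminar material; the function names, the trade-off parameter, and the sample data are invented). It time-shares a single machine between round-robin and a prediction-following rule, in the spirit of learning-augmented non-clairvoyant scheduling, and then chooses the trade-off parameter in a data-driven way from a handful of past instances.

```python
# Purely illustrative sketch: learning-augmented single-machine scheduling
# (prediction-following blended with round-robin) plus data-driven selection
# of the blending parameter from past instances. All names/data are invented.

import random


def preferential_round_robin(true_times, predicted_times, lam, dt=0.01):
    """Simulate the total completion time on one machine.

    A `lam` fraction of the machine runs round-robin over all unfinished jobs
    (robust, ignores predictions); the remaining 1 - lam is devoted to the
    unfinished job with the smallest prediction (good when predictions are
    accurate). `true_times` drive the simulation, `predicted_times` the decisions.
    """
    remaining = list(true_times)
    alive = set(range(len(true_times)))
    t, total_completion = 0.0, 0.0
    while alive:
        favourite = min(alive, key=lambda j: predicted_times[j])
        share = {j: lam * dt / len(alive) for j in alive}  # round-robin part
        share[favourite] += (1.0 - lam) * dt               # prediction part
        t += dt
        for j in list(alive):
            remaining[j] -= share[j]
            if remaining[j] <= 1e-9:          # job j completes in this step
                alive.remove(j)
                total_completion += t
    return total_completion


def pick_lambda(history, candidates=(0.0, 0.25, 0.5, 0.75, 1.0)):
    """Data-driven algorithm selection: evaluate each candidate trade-off
    parameter on past (true, predicted) instances and keep the best one."""
    def avg_cost(lam):
        return sum(preferential_round_robin(p, q, lam)
                   for p, q in history) / len(history)
    return min(candidates, key=avg_cost)


if __name__ == "__main__":
    random.seed(0)
    # Hypothetical "past" instances: true processing times plus noisy predictions.
    history = []
    for _ in range(5):
        true = [random.uniform(1.0, 10.0) for _ in range(6)]
        pred = [max(0.1, p + random.gauss(0.0, 2.0)) for p in true]
        history.append((true, pred))
    print("trade-off parameter chosen from past instances:", pick_lambda(history))
```

With the parameter close to 0 the policy trusts the predictions and can beat the prediction-free bound when the error is small; with the parameter close to 1 it degenerates to plain round-robin and keeps its prediction-free guarantee; intermediate values trade the two off, mirroring properties (1)-(3) above.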
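
For the known-distribution case in the third theme, a classical single-machine observation (added here only as a worked illustration, not part of the seminar report) already shows how distributional knowledge enters the design. For a fixed nonpreemptive order \(\sigma\) on a single machine, linearity of expectation gives

\[
\mathbb{E}\Big[\sum_{j=1}^{n} C_{\sigma(j)}\Big]
= \sum_{j=1}^{n} \sum_{k=1}^{j} \mathbb{E}\big[p_{\sigma(k)}\big]
= \sum_{k=1}^{n} (n-k+1)\,\mathbb{E}\big[p_{\sigma(k)}\big],
\]

so the expected total completion time depends on the distributions only through their means, and a standard exchange argument shows it is minimized by sequencing jobs in nondecreasing order of \(\mathbb{E}[p_j]\) (shortest expected processing time first). The questions above ask how far such reasoning extends to multiple machines, adaptive policies, and distributions that are only learned over time.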

Organization of the Seminar. The seminar brought together 43 researchers from theoretical computer science, mathematical optimization, and operations research. The participants consisted of both senior and junior researchers, including a number of postdocs and advanced PhD students.

During the five days of the seminar, 29 talks of different lengths took place. Six keynote speakers gave 60-minute talks, each surveying the state of the art of the respective area or presenting recent highlight results:

  • Siddhartha Banerjee: We Need to Talk About How we Talk About Online Decision-Making
  • Sami Davies: Scheduling with Communication Delays
  • Silvio Lattanzi: Clustering with Advice
  • Claire Mathieu: Stable matching in Practice
  • Debmalya Panigrahi: Santa Claus with Predictions
  • Sergei Vassilvitskii: Five(ish) Years of Algorithms with Predictions.

The remaining slots were filled with shorter talks of 30 minutes on scheduling, resource allocation, and related problems, from the perspective of coping with uncertainty, learning, and applications in practice.

Further, at the beginning of the week, open problem sessions were held. One of them was devoted to problems that would have been of interest to Gerhard Woeginger, a pivotal member of the community who passed away in 2022. Throughout the week, a few sessions with spotlight talks of 8 minutes gave participants the chance to announce recent results and invite discussion. The schedule left ample free time that was actively used for fruitful discussions and joint research.

Outcome. Organizers and participants regard the seminar as a great success. The seminar achieved its goal of bringing together the related communities, sharing state-of-the-art research, and discussing the current major challenges. The talks were excellent and stimulating; participants actively met in working groups in the afternoons and evenings. It was remarked positively that a significant number of younger researchers (postdocs and PhD students) participated and integrated well.

Acknowledgements. The organizers wish to express their gratitude towards the Scientific Directorate and the administration of the Dagstuhl Center for their great support for this seminar.

Copyright Nicole Megow, Benjamin J. Moseley, David Shmoys, Ola Svensson, and Sergei Vassilvitskii

Motivation

Scheduling is a major research field that is studied from a practical and theoretical perspective in computer science, mathematical optimization and operations research. Applications range from traditional production scheduling and project planning to the newly arising resource management tasks in the area of internet technology such as distributed cloud service networks and the allocation of virtual machines to physical servers. Algorithms for scheduling problems have been one of the richest areas of algorithmic research, spanning nearly 70 years of work. Throughout, research has been prompted by the fact that in most settings, these computational problems are quite challenging, and new approaches and frameworks are continually being added to help tackle a broadening portfolio of scheduling problems.

At this Dagstuhl Seminar, we focus on emerging models for beyond-worst-case algorithm design, in particular recent approaches that incorporate learning. Several models for integrating learning into algorithm design have been proposed and have already demonstrated advances in the state of the art for various scheduling applications. This seminar will focus on three established themes:

  1. Scheduling with error-prone learned predictions
  2. Data-driven algorithm design
  3. Stochastic and Bayesian learning in scheduling

The seminar aims to bring together researchers working in distinct areas to encourage cross-fertilization among different research directions. As the field of learning is very broad, we methodologically focus on the theoretical design of algorithms with provable performance guarantees.

Copyright Nicole Megow, Benjamin J. Moseley, David Shmoys, Ola Svensson, and Sergei Vassilvitskii

Participants
  • Antonios Antoniadis (University of Twente, NL) [dblp]
  • Yossi Azar (Tel Aviv University, IL) [dblp]
  • Etienne Bamas (EPFL - Lausanne, CH)
  • Siddhartha Banerjee (Cornell University - Ithaca, US)
  • Sanjoy Baruah (Washington University - St. Louis, US) [dblp]
  • Sami Davies (Northwestern University - Evanston, US)
  • Christoph Dürr (Sorbonne University - Paris, FR) [dblp]
  • Franziska Eberle (London School of Economics, GB) [dblp]
  • Daniel Freund (MIT - Cambridge, US)
  • Naveen Garg (Indian Institute of Technology - New Delhi, IN) [dblp]
  • Vineet Goyal (Columbia University, US)
  • Thomas Kesselheim (Universität Bonn, DE) [dblp]
  • Samir Khuller (Northwestern University - Evanston, US) [dblp]
  • Amit Kumar (Indian Institute of Technology - New Delhi, IN) [dblp]
  • Silvio Lattanzi (Google - Barcelona, ES) [dblp]
  • Thomas Lavastida (University of Texas - Dallas, US)
  • Alex Lindermayr (Universität Bremen, DE)
  • Alberto Marchetti-Spaccamela (Sapienza University of Rome, IT) [dblp]
  • Claire Mathieu (CNRS - Paris, FR) [dblp]
  • Nicole Megow (Universität Bremen, DE) [dblp]
  • Benjamin J. Moseley (Carnegie Mellon University - Pittsburgh, US) [dblp]
  • Seffi Naor (Technion - Haifa, IL) [dblp]
  • Debmalya Panigrahi (Duke University - Durham, US) [dblp]
  • Kirk Pruhs (University of Pittsburgh, US) [dblp]
  • Lars Rohwedder (Maastricht University, NL) [dblp]
  • Thomas Rothvoss (University of Washington - Seattle, US) [dblp]
  • Kevin Schewior (University of Southern Denmark - Odense, DK) [dblp]
  • Jens Schlöter (Universität Bremen, DE)
  • Jiri Sgall (Charles University - Prague, CZ) [dblp]
  • David Shmoys (Cornell University - Ithaca, US) [dblp]
  • Martin Skutella (TU Berlin, DE) [dblp]
  • Frits C. R. Spieksma (TU Eindhoven, NL) [dblp]
  • Clifford Stein (Columbia University, US) [dblp]
  • Leen Stougie (CWI - Amsterdam, NL) [dblp]
  • Ola Svensson (EPFL - Lausanne, CH) [dblp]
  • Chaitanya Swamy (University of Waterloo, CA)
  • Marc Uetz (University of Twente - Enschede, NL) [dblp]
  • Ali Vakilian (TTIC - Chicago, US) [dblp]
  • Sergei Vassilvitskii (Google - New York, US) [dblp]
  • Jose Verschae (PUC - Santiago de Chile, CL) [dblp]
  • Tjark Vredeveld (Maastricht Univ. School of Business & Economics, NL) [dblp]
  • Andreas Wiese (TU München, DE) [dblp]
  • Rudy Zhou (Carnegie Mellon University - Pittsburgh, US)

Related Seminars
  • Dagstuhl-Seminar 08071: Scheduling (2008-02-10 - 2008-02-15) (Details)
  • Dagstuhl-Seminar 10071: Scheduling (2010-02-14 - 2010-02-19) (Details)
  • Dagstuhl-Seminar 13111: Scheduling (2013-03-10 - 2013-03-15) (Details)
  • Dagstuhl-Seminar 16081: Scheduling (2016-02-21 - 2016-02-26) (Details)
  • Dagstuhl-Seminar 18101: Scheduling (2018-03-04 - 2018-03-09) (Details)
  • Dagstuhl-Seminar 20081: Scheduling (2020-02-16 - 2020-02-21) (Details)
  • Dagstuhl-Seminar 25121: Scheduling (2025-03-16 - 2025-03-21) (Details)

Classification
  • Data Structures and Algorithms
  • Machine Learning

Keywords
  • scheduling
  • mathematical optimization
  • approximation algorithms
  • learning methods
  • uncertainty