Dagstuhl Seminar 25471
Online Algorithms beyond Competitive Analysis
(Nov 16 – Nov 21, 2025)
Organizers
- Sungjin Im (University of California - Merced, US)
- Nicole Megow (Universität Bremen, DE)
- Debmalya Panigrahi (Duke University - Durham, US)
- Sahil Singla (Georgia Institute of Technology - Atlanta, US)
Contact
- Michael Gerke (for scientific matters)
- Jutka Gasiorowski (for administrative matters)
In many real-world situations, the input to an algorithm is revealed piecemeal over time, and algorithmic decisions must be made without knowledge of future inputs. This has given rise to the field of online algorithms. Traditionally, the performance of an online algorithm is measured by comparing the quality of the solution produced by the algorithm to that of an offline optimal solution. This framework, called competitive analysis, has led to a beautiful theory of worst-case analysis for online algorithms over the last few decades. At the same time, it has also been painfully obvious that this model is overly pessimistic in many situations and leads to strong lower bounds that bucket together algorithms with significant differences in observed empirical performance. In recent years, there has been an increasing push to go beyond worst-case competitive analysis of online algorithms, and several models and frameworks have been proposed that have attracted much research activity. This Dagstuhl Seminar seeks to bring together the leaders in online algorithms research to discuss and debate these approaches, helping chart the course of online algorithms research for years to come.
We identify three key directions for online algorithms that the seminar will focus on. Each goes beyond traditional worst-case analysis in a different way; collectively, they appear particularly promising based on recent advances.

The first is online algorithms with predictions. Here, the algorithm is augmented with predictions about the unknown future, but the predictions may in general be noisy. The goal is to design online algorithms that automatically take advantage of accurate predictions to beat worst-case lower bounds, while at the same time doing no worse than traditional online algorithms even if the predictions are arbitrarily inaccurate. This offers a smooth transition between offline and online algorithms based on the quality of the information available to the algorithm in advance.

The second direction is that of online and dynamic algorithms with recourse. Here, the online algorithm is not provided any additional information, but can make a limited number of changes to previous decisions based on newly available information. Again, this offers a smooth transition between offline and online models: if the algorithm is allowed to entirely recompute its solution in every online step, the model reduces to the offline setting, while with no recourse budget it behaves like a traditional online algorithm.

The final direction is that of online stochastic input models. Here, the algorithm is not evaluated on a worst-case input; rather, one assumes that the input is drawn from some probability distribution, which limits the adversary's ability to produce arbitrary inputs. For instance, the adversary may have to declare the distribution (but not the input itself) to the algorithm at the very outset, i.e., offline. Or the inputs in different steps may be drawn independently from the same distribution; while the algorithm does not know the distribution per se, it has the opportunity to learn it in the long run. We hope that by focusing on these themes, the seminar will help set the agenda for online algorithms researchers over the next several years.
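As a concrete illustration of the predictions model, consider the classic ski-rental problem: renting costs 1 per day, buying costs b, and the number of ski days x is revealed online. A well-known deterministic scheme (due to Purohit, Svitkina, and Kumar) uses a prediction y of x and a trust parameter λ ∈ (0, 1], paying at most (1+λ)·OPT when the prediction is accurate and at most (1+1/λ)·OPT no matter how wrong it is. The sketch below is illustrative only; the parameter names are ours, not part of the seminar material.

```python
import math

def ski_rental_with_prediction(b, y, lam, x):
    """Cost incurred by the prediction-augmented ski-rental algorithm.

    b   -- cost of buying skis (renting costs 1 per day)
    y   -- predicted number of ski days (may be arbitrarily wrong)
    lam -- trust parameter in (0, 1]: small lam trusts the prediction more
    x   -- true number of ski days, revealed online

    If the prediction says the season is long (y >= b), buy early, on day
    ceil(lam * b); otherwise delay buying until day ceil(b / lam).
    """
    buy_day = math.ceil(lam * b) if y >= b else math.ceil(b / lam)
    if x >= buy_day:
        # Rented for buy_day - 1 days, then bought on day buy_day.
        return (buy_day - 1) + b
    # Season ended before the buying day: rented every day.
    return x

def opt(b, x):
    """Offline optimum: buy immediately or rent for the whole season."""
    return min(x, b)

# Accurate prediction of a long season: ratio 14/10 <= 1 + lam = 1.5.
print(ski_rental_with_prediction(b=10, y=100, lam=0.5, x=100))  # 14
# Badly wrong prediction (y = 0, but x = 100): ratio 29/10 <= 1 + 1/lam = 3.
print(ski_rental_with_prediction(b=10, y=0, lam=0.5, x=100))    # 29
```

Varying λ traces out exactly the consistency–robustness trade-off the predictions framework is after: λ → 0 fully trusts the prediction, while λ = 1 recovers the classical break-even algorithm.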
Classification
- Data Structures and Algorithms
Keywords
- online algorithms
- beyond worst-case algorithm design
- algorithms under uncertainty