Abstract
Many dynamic graph algorithms have an amortized update time rather than a stronger worst-case guarantee. But amortized data structures are not suitable for real-time systems, where every individual operation has to be executed quickly. For this reason, many recent randomized results aim to provide a guarantee stronger than amortized expected. The strongest possible guarantee for a randomized algorithm is that it is always correct (Las Vegas) and has high-probability worst-case update time, which gives a bound on the time for each individual operation that holds with high probability. In this paper we present the first polylogarithmic high-probability worst-case time bounds for the dynamic spanner and the dynamic maximal matching problems.

1. For dynamic spanner, the only known o(n) worst-case bounds were O(n^{3/4}) high-probability worst-case update time for maintaining a 3-spanner and O(n^{5/9}) for maintaining a 5-spanner. We give an O(1)^k · log^3(n) high-probability worst-case time bound for maintaining a (2k − 1)-spanner, which yields the first worst-case polylogarithmic update time for all constant k. (All the results above maintain the optimal tradeoff of stretch 2k − 1 and Õ(n^{1+1/k}) edges.)
2. For dynamic maximal matching, or dynamic 2-approximate maximum matching, no algorithm with an o(n) worst-case time bound was known; we present an algorithm with O(log^5(n)) high-probability worst-case time. Similar worst-case bounds existed only for maintaining a matching that was (2 + ε)-approximate, and hence not maximal.

Our results are achieved using a new approach for converting amortized guarantees to worst-case ones for randomized data structures by going through a third type of guarantee, which is a middle ground between the two above: an algorithm is said to have worst-case expected update time α if for every update σ, the expected time to process σ is at most α. Although stronger than amortized expected, the worst-case expected guarantee does not resolve the fundamental problem of amortization: a worst-case expected update time of O(1) still allows for the possibility that one out of every f(n) updates requires Θ(f(n)) time to process, for arbitrarily large f(n).

In this paper we present a black-box reduction that converts any data structure with worst-case expected update time into one with a high-probability worst-case update time: the query time remains the same, while the update time increases by a factor of O(log^2(n)). Thus we achieve our results in two steps: (1) first we show how to convert existing dynamic graph algorithms with amortized expected polylogarithmic running times into algorithms with worst-case expected polylogarithmic running times; (2) then we use our black-box reduction to achieve the polylogarithmic high-probability worst-case time bounds. All our algorithms are Las-Vegas-type algorithms.
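For intuition on the stretch/size tradeoff the spanner results preserve, the sketch below is *not* the paper's dynamic algorithm but the classic static greedy construction (Althöfer et al.): add an edge only if its endpoints are currently at distance greater than 2k − 1 in the spanner so far. This achieves stretch 2k − 1 with O(n^{1+1/k}) edges. All function names here are illustrative.

```python
from collections import deque

def bounded_dist(adj, s, t, limit):
    """BFS distance from s to t, truncated at `limit` hops.
    Returns limit + 1 if t is farther than limit (or unreachable)."""
    if s == t:
        return 0
    dist = {s: 0}
    q = deque([s])
    while q:
        x = q.popleft()
        if dist[x] == limit:
            continue  # no point exploring beyond the cutoff
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                if y == t:
                    return dist[y]
                q.append(y)
    return limit + 1

def greedy_spanner(n, edges, k):
    """Static greedy (2k-1)-spanner of an unweighted graph on
    vertices 0..n-1. Keeps edge (u, v) only if u and v are at
    distance > 2k - 1 in the spanner built so far; the result has
    girth > 2k and hence O(n^(1 + 1/k)) edges."""
    adj = [[] for _ in range(n)]
    spanner = []
    for u, v in edges:
        if bounded_dist(adj, u, v, 2 * k - 1) > 2 * k - 1:
            adj[u].append(v)
            adj[v].append(u)
            spanner.append((u, v))
    return spanner
```

The stretch bound follows directly: any edge (u, v) that is skipped already has a path of length at most 2k − 1 between its endpoints, so every original distance is preserved up to a factor 2k − 1.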
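The abstract's identification of "maximal matching" with "2-approximate maximum matching" is the standard greedy argument: every edge of a maximum matching M* must share an endpoint with some edge of any maximal matching M (otherwise M could be extended), and one edge of M can block at most two edges of M*, so |M*| ≤ 2|M|. A minimal static sketch of that fact (again, not the paper's dynamic data structure):

```python
def maximal_matching(edges):
    """Greedy maximal matching: scan the edges once and take an edge
    iff both endpoints are still free. The result M is maximal, and
    any maximum matching M* satisfies |M*| <= 2|M|."""
    matched = set()
    M = []
    for u, v in edges:
        if u not in matched and v not in matched:
            matched.add(u)
            matched.add(v)
            M.append((u, v))
    return M
```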
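The black-box reduction itself is the paper's contribution and is not reproduced here; the demo below only shows the probability amplification that makes such a reduction plausible. If each of c·log₂(n) independent copies of a structure processes an update in expected time α, Markov's inequality bounds the chance that a fixed copy exceeds 2α by 1/2, so the chance that *all* copies exceed 2α is at most n^{-c}. The heavy-tailed cost distribution and the parameters (f = 10 000, c = 2) are assumptions made up for this demo, and a real reduction must additionally keep all copies consistent with the update sequence, which this toy ignores.

```python
import math
import random

def heavy_tailed_update_time(f=10_000):
    """Toy update cost with constant expectation but rare huge spikes:
    cost f with probability 1/f, else cost 1, so E[cost] < 2."""
    return f if random.random() < 1.0 / f else 1

def amplified_update_time(n, c=2):
    """Cost of the fastest of c*log2(n) independent copies.
    P[every copy exceeds twice its expectation] <= (1/2)^(c*log2 n)
    = n^(-c) by Markov's inequality applied to each copy."""
    copies = max(1, int(c * math.log2(n)))
    return min(heavy_tailed_update_time() for _ in range(copies))

if __name__ == "__main__":
    n = 1_000_000
    single = [heavy_tailed_update_time() for _ in range(200_000)]
    boosted = [amplified_update_time(n) for _ in range(200_000)]
    print("single copy: max =", max(single))   # spikes almost surely appear
    print("amplified:   max =", max(boosted))  # spikes vanish w.h.p.
```

This is exactly the gap the abstract describes: a worst-case *expected* O(1) bound tolerates the single-copy spikes, while the high-probability worst-case guarantee rules them out for every individual update.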
Original language | English (US)
---|---
Pages | 1899-1918
Number of pages | 20
DOIs |
State | Published - 2019
Event | 30th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019 - San Diego, United States. Duration: Jan 6 2019 → Jan 9 2019
Conference

Conference | 30th Annual ACM-SIAM Symposium on Discrete Algorithms, SODA 2019
---|---
Country/Territory | United States
City | San Diego
Period | 1/6/19 → 1/9/19
ASJC Scopus subject areas
- Software
- General Mathematics