TY - GEN
T1 - On the Amortized Complexity of Approximate Counting
AU - Aden-Ali, Ishaq
AU - Han, Yanjun
AU - Nelson, Jelani
AU - Yu, Huacheng
N1 - Publisher Copyright:
© Ishaq Aden-Ali, Yanjun Han, Jelani Nelson, and Huacheng Yu.
PY - 2024/9
Y1 - 2024/9
N2 - Naively storing a counter up to value n requires Ω(log n) bits of memory. Nelson and Yu [9], following work of Morris [8], showed that if the query answers need only be (1 + ϵ)-approximate with probability at least 1 − δ, then O(log log n + log log(1/δ) + log(1/ϵ)) bits suffice, and in fact this bound is tight. Morris’ original motivation for studying this problem, as well as modern applications, requires maintaining not just one counter but rather k counters for large k. This motivates the following question: for large k, can k counters be maintained simultaneously using asymptotically less memory than k times the cost of an individual counter? That is, does this problem admit an improved amortized space complexity bound? We answer this question in the negative. Specifically, we prove a lower bound for nearly the full range of parameters showing that, in terms of memory usage, there is no asymptotic benefit possible via amortization when storing multiple counters. Our main proof uses a notion of “information cost” recently introduced by Braverman, Garg, and Woodruff [2] to prove lower bounds for streaming algorithms.
KW - approximate counting
KW - information complexity
KW - lower bounds
KW - streaming
UR - http://www.scopus.com/inward/record.url?scp=85204502178&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85204502178&partnerID=8YFLogxK
U2 - 10.4230/LIPIcs.APPROX/RANDOM.2024.33
DO - 10.4230/LIPIcs.APPROX/RANDOM.2024.33
M3 - Conference contribution
AN - SCOPUS:85204502178
T3 - Leibniz International Proceedings in Informatics, LIPIcs
BT - Approximation, Randomization, and Combinatorial Optimization. Algorithms and Techniques, APPROX/RANDOM 2024
A2 - Kumar, Amit
A2 - Ron-Zewi, Noga
PB - Schloss Dagstuhl – Leibniz-Zentrum für Informatik GmbH, Dagstuhl Publishing
T2 - 27th International Conference on Approximation Algorithms for Combinatorial Optimization Problems, APPROX 2024 and the 28th International Conference on Randomization and Computation, RANDOM 2024
Y2 - 28 August 2024 through 30 August 2024
ER -