TY - GEN
T1 - An Associativity Threshold Phenomenon in Set-Associative Caches
AU - Bender, Michael A.
AU - Das, Rathish
AU - Farach-Colton, Martín
AU - Tagliavini, Guido
N1 - Publisher Copyright:
© 2023 ACM.
PY - 2023/6/17
Y1 - 2023/6/17
N2 - In an α-way set-associative cache, the cache is partitioned into disjoint sets of size α, and each item can be cached in only one set, typically selected via a hash function. Set-associative caches are widely used and have many benefits, e.g., in terms of latency or concurrency, over fully-associative caches, but they often incur more cache misses. As the set size α decreases, the benefits increase, but the paging costs worsen. In this paper we characterize the performance of an α-way set-associative LRU cache of total size k, as a function of α = α(k). We prove the following, assuming that sets are selected using a fully random hash function: (i) For α = ω(log k), the paging cost of an α-way set-associative LRU cache is within an additive O(1) of that of a fully-associative LRU cache of size (1 - o(1))k, with probability 1 - 1/poly(k), for all request sequences of length poly(k). (ii) For α = o(log k), and for all c = O(1) and r = O(1), the paging cost of an α-way set-associative LRU cache is not within a factor c of that of a fully-associative LRU cache of size k/r, for some request sequence of length O(k^1.01). (iii) For α = ω(log k), if the hash function can be occasionally changed, the paging cost of an α-way set-associative LRU cache is within a factor 1 + o(1) of that of a fully-associative LRU cache of size (1 - o(1))k, with probability 1 - 1/poly(k), for request sequences of arbitrary (e.g., super-polynomial) length. Some of our results generalize to other paging algorithms besides LRU, such as least-frequently-used (LFU).
AB - In an α-way set-associative cache, the cache is partitioned into disjoint sets of size α, and each item can be cached in only one set, typically selected via a hash function. Set-associative caches are widely used and have many benefits, e.g., in terms of latency or concurrency, over fully-associative caches, but they often incur more cache misses. As the set size α decreases, the benefits increase, but the paging costs worsen. In this paper we characterize the performance of an α-way set-associative LRU cache of total size k, as a function of α = α(k). We prove the following, assuming that sets are selected using a fully random hash function: (i) For α = ω(log k), the paging cost of an α-way set-associative LRU cache is within an additive O(1) of that of a fully-associative LRU cache of size (1 - o(1))k, with probability 1 - 1/poly(k), for all request sequences of length poly(k). (ii) For α = o(log k), and for all c = O(1) and r = O(1), the paging cost of an α-way set-associative LRU cache is not within a factor c of that of a fully-associative LRU cache of size k/r, for some request sequence of length O(k^1.01). (iii) For α = ω(log k), if the hash function can be occasionally changed, the paging cost of an α-way set-associative LRU cache is within a factor 1 + o(1) of that of a fully-associative LRU cache of size (1 - o(1))k, with probability 1 - 1/poly(k), for request sequences of arbitrary (e.g., super-polynomial) length. Some of our results generalize to other paging algorithms besides LRU, such as least-frequently-used (LFU).
KW - lru
KW - paging
KW - set-associative cache
UR - http://www.scopus.com/inward/record.url?scp=85164282351&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85164282351&partnerID=8YFLogxK
U2 - 10.1145/3558481.3591084
DO - 10.1145/3558481.3591084
M3 - Conference contribution
AN - SCOPUS:85164282351
T3 - Annual ACM Symposium on Parallelism in Algorithms and Architectures
SP - 117
EP - 127
BT - SPAA 2023 - Proceedings of the 35th ACM Symposium on Parallelism in Algorithms and Architectures
PB - Association for Computing Machinery
T2 - 35th ACM Symposium on Parallelism in Algorithms and Architectures, SPAA 2023
Y2 - 17 June 2023 through 19 June 2023
ER -
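
N1 - Editor's note: a minimal Python sketch of the α-way set-associative LRU cache the abstract analyzes. The class name, method names, and the use of Python's built-in hash (standing in for the fully random hash function the paper assumes) are illustrative assumptions, not taken from the paper.

from collections import OrderedDict

class SetAssociativeLRU:
    """Cache of total size k partitioned into k/alpha disjoint sets of
    size alpha; each item may only reside in the set its hash selects."""

    def __init__(self, k: int, alpha: int):
        assert k % alpha == 0, "total size must be a multiple of alpha"
        self.alpha = alpha
        self.num_sets = k // alpha
        # One LRU queue per set (least recently used first, most recent last).
        self.sets = [OrderedDict() for _ in range(self.num_sets)]

    def request(self, item) -> bool:
        """Serve one request; return True on a hit, False on a miss."""
        # Python's hash() is a stand-in for the paper's fully random hash.
        s = self.sets[hash(item) % self.num_sets]
        if item in s:
            s.move_to_end(item)        # hit: mark item most recently used
            return True
        if len(s) == self.alpha:       # miss in a full set: evict the
            s.popitem(last=False)      # set-local least-recently-used item
        s[item] = None                 # insert the requested item
        return False

Setting alpha = k recovers a fully-associative LRU cache (a single set), the baseline against which the abstract's bounds are stated.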