TY - JOUR
T1 - External-memory Dictionaries in the Affine and PDAM Models
AU - Bender, Michael A.
AU - Conway, Alex
AU - Farach-Colton, Martín
AU - Jannen, William
AU - Jiao, Yizheng
AU - Johnson, Rob
AU - Knorr, Eric
AU - McAllister, Sara
AU - Mukherjee, Nirjhar
AU - Pandey, Prashant
AU - Porter, Donald E.
AU - Yuan, Jun
AU - Zhan, Yang
N1 - Publisher Copyright:
© 2021 Association for Computing Machinery.
PY - 2021/9
Y1 - 2021/9
N2 - Storage devices have complex performance profiles, including costs to initiate IOs (e.g., seek times in hard drives), parallelism and bank conflicts (in SSDs), costs to transfer data, and firmware-internal operations. The Disk-access Machine (DAM) model simplifies reality by assuming that storage devices transfer data in blocks of size B and that all transfers have unit cost. Despite its simplifications, the DAM model is reasonably accurate. In fact, if B is set to the half-bandwidth point, where the latency and bandwidth of the hardware are equal, then the DAM approximates the IO cost on any hardware to within a factor of 2. Furthermore, the DAM model explains the popularity of B-trees in the 1970s and the current popularity of Bε-trees and log-structured merge trees. But it fails to explain why some B-trees use small nodes, whereas all Bε-trees use large nodes. In a DAM, all IOs, and hence all nodes, are the same size. In this article, we show that the affine and PDAM models, which are small refinements of the DAM model, yield a surprisingly large improvement in predictability without sacrificing ease of use. We present benchmarks on a large collection of storage devices showing that the affine and PDAM models give good approximations of the performance characteristics of hard drives and SSDs, respectively. We show that the affine model explains node-size choices in B-trees and Bε-trees. Furthermore, the models predict that B-trees are highly sensitive to variations in the node size, whereas Bε-trees are much less sensitive. These predictions are borne out empirically. Finally, we show that in both the affine and PDAM models, it pays to organize data structures to exploit varying IO size. In the affine model, Bε-trees can be optimized so that all operations are simultaneously optimal, even up to lower-order terms. In the PDAM model, Bε-trees (or B-trees) can be organized so that both sequential and concurrent workloads are handled efficiently. We conclude that the DAM model is useful as a first cut when designing or analyzing an algorithm or data structure, but the affine and PDAM models enable the algorithm designer to optimize parameter choices and fill in design details.
AB - Storage devices have complex performance profiles, including costs to initiate IOs (e.g., seek times in hard drives), parallelism and bank conflicts (in SSDs), costs to transfer data, and firmware-internal operations. The Disk-access Machine (DAM) model simplifies reality by assuming that storage devices transfer data in blocks of size B and that all transfers have unit cost. Despite its simplifications, the DAM model is reasonably accurate. In fact, if B is set to the half-bandwidth point, where the latency and bandwidth of the hardware are equal, then the DAM approximates the IO cost on any hardware to within a factor of 2. Furthermore, the DAM model explains the popularity of B-trees in the 1970s and the current popularity of Bε-trees and log-structured merge trees. But it fails to explain why some B-trees use small nodes, whereas all Bε-trees use large nodes. In a DAM, all IOs, and hence all nodes, are the same size. In this article, we show that the affine and PDAM models, which are small refinements of the DAM model, yield a surprisingly large improvement in predictability without sacrificing ease of use. We present benchmarks on a large collection of storage devices showing that the affine and PDAM models give good approximations of the performance characteristics of hard drives and SSDs, respectively. We show that the affine model explains node-size choices in B-trees and Bε-trees. Furthermore, the models predict that B-trees are highly sensitive to variations in the node size, whereas Bε-trees are much less sensitive. These predictions are borne out empirically. Finally, we show that in both the affine and PDAM models, it pays to organize data structures to exploit varying IO size. In the affine model, Bε-trees can be optimized so that all operations are simultaneously optimal, even up to lower-order terms. In the PDAM model, Bε-trees (or B-trees) can be organized so that both sequential and concurrent workloads are handled efficiently. We conclude that the DAM model is useful as a first cut when designing or analyzing an algorithm or data structure, but the affine and PDAM models enable the algorithm designer to optimize parameter choices and fill in design details.
KW - External memory
KW - performance models
KW - write optimization
UR - http://www.scopus.com/inward/record.url?scp=85115645832&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85115645832&partnerID=8YFLogxK
U2 - 10.1145/3470635
DO - 10.1145/3470635
M3 - Article
AN - SCOPUS:85115645832
SN - 2329-4949
VL - 8
JO - ACM Transactions on Parallel Computing
JF - ACM Transactions on Parallel Computing
IS - 3
M1 - 15
ER -