TY - GEN
T1 - Data-Efficient Performance Modeling via Pre-training
AU - Liu, Chunting
AU - Baghdadi, Riyadh
N1 - Publisher Copyright:
© 2025 Copyright held by the owner/author(s).
PY - 2025/2/25
Y1 - 2025/2/25
AB - Performance models are essential for automatic code optimization, enabling compilers to predict the effects of code transformations on performance and guide the search for optimal transformations. Building state-of-the-art performance models with deep learning, however, requires vast labeled datasets of random programs, an expensive and time-consuming process that can stretch over months. This paper introduces a self-supervised pre-training scheme with autoencoders to reduce the need for labeled data. By pre-training on a large dataset of random programs, the autoencoder learns representations of code and transformations, which are then used to embed programs for the performance model. Implemented in the Tiramisu autoscheduler, our approach improves model accuracy with less data. For example, to achieve a MAPE of 20.72%, the original model requires 18 million data points, whereas our method achieves a comparable MAPE of 22.44% with only 3.6 million data points, reducing data requirements by 5×.
KW - automatic code optimization
KW - compilers
KW - deep learning
KW - performance model
KW - pre-training
KW - Tiramisu
UR - http://www.scopus.com/inward/record.url?scp=105001594596&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=105001594596&partnerID=8YFLogxK
U2 - 10.1145/3708493.3712683
DO - 10.1145/3708493.3712683
M3 - Conference contribution
AN - SCOPUS:105001594596
T3 - CC 2025 - Proceedings of the 34th ACM SIGPLAN International Conference on Compiler Construction
SP - 48
EP - 59
BT - CC 2025 - Proceedings of the 34th ACM SIGPLAN International Conference on Compiler Construction
A2 - Kluss, Daniel
A2 - Achour, Sara
A2 - Palsberg, Jens
PB - Association for Computing Machinery, Inc.
T2 - 34th ACM SIGPLAN International Conference on Compiler Construction, CC 2025
Y2 - 1 March 2025 through 2 March 2025
ER -