Data-Efficient Performance Modeling via Pre-training

Chunting Liu, Riyadh Baghdadi

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Performance models are essential for automatic code optimization, enabling compilers to predict the effects of code transformations on performance and guide the search for optimal transformations. Building state-of-the-art performance models with deep learning, however, requires vast labeled datasets of random programs, an expensive and time-consuming process that can stretch over months. This paper introduces a self-supervised pre-training scheme with autoencoders to reduce the need for labeled data. By pre-training on a large dataset of random programs, the autoencoder learns representations of code and transformations, which are then used to embed programs for the performance model. Implemented in the Tiramisu autoscheduler, our approach improves model accuracy with less data. For example, to achieve a MAPE of 20.72%, the original model requires 18 million data points, whereas our method achieves a comparable MAPE of 22.44% with only 3.6 million data points, a 5× reduction in data requirements.
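
The abstract describes a two-stage scheme: self-supervised autoencoder pre-training on unlabeled random programs, followed by supervised training of the performance model on the learned embeddings. The PyTorch sketch below illustrates that idea under simplifying assumptions; the feature dimensions, layer sizes, loss choices, and synthetic tensors are illustrative stand-ins, not the paper's actual Tiramisu autoscheduler architecture or data pipeline.

    # Minimal sketch of autoencoder pre-training followed by supervised
    # fine-tuning of a performance model. All shapes, sizes, and losses
    # are assumptions for illustration, not the paper's implementation.
    import torch
    import torch.nn as nn

    FEAT_DIM, LATENT_DIM = 256, 64  # assumed program/transformation feature sizes

    class AutoEncoder(nn.Module):
        def __init__(self):
            super().__init__()
            self.encoder = nn.Sequential(
                nn.Linear(FEAT_DIM, 128), nn.ReLU(), nn.Linear(128, LATENT_DIM))
            self.decoder = nn.Sequential(
                nn.Linear(LATENT_DIM, 128), nn.ReLU(), nn.Linear(128, FEAT_DIM))

        def forward(self, x):
            return self.decoder(self.encoder(x))

    # Stage 1: self-supervised pre-training on a large unlabeled corpus.
    # Random tensors stand in for encoded random programs + transformations.
    unlabeled = torch.randn(10_000, FEAT_DIM)
    ae = AutoEncoder()
    opt = torch.optim.Adam(ae.parameters(), lr=1e-3)
    for epoch in range(5):
        recon = ae(unlabeled)
        loss = nn.functional.mse_loss(recon, unlabeled)  # reconstruction objective
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Stage 2: supervised performance model on a much smaller labeled set.
    # The pre-trained encoder embeds programs; a small head predicts speedup.
    labeled_x = torch.randn(1_000, FEAT_DIM)          # labeled programs (stand-in)
    labeled_y = torch.rand(1_000, 1) * 10 + 0.1       # measured speedups (stand-in)
    head = nn.Sequential(nn.Linear(LATENT_DIM, 64), nn.ReLU(), nn.Linear(64, 1))
    opt = torch.optim.Adam(
        list(ae.encoder.parameters()) + list(head.parameters()), lr=1e-4)
    for epoch in range(5):
        pred = head(ae.encoder(labeled_x))
        loss = ((pred - labeled_y).abs() / labeled_y).mean()  # MAPE-style loss
        opt.zero_grad()
        loss.backward()
        opt.step()

Because stage 1 needs only unlabeled random programs, which are cheap to generate, the expensive benchmarking effort is concentrated on the much smaller labeled set used in stage 2; this is the source of the data-efficiency gain the abstract reports.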

Original language: English (US)
Title of host publication: CC 2025 - Proceedings of the 34th ACM SIGPLAN International Conference on Compiler Construction
Editors: Daniel Kluss, Sara Achour, Jens Palsberg
Publisher: Association for Computing Machinery, Inc
Pages: 48-59
Number of pages: 12
ISBN (Electronic): 9798400714078
DOIs
State: Published - Feb 25 2025
Event: 34th ACM SIGPLAN International Conference on Compiler Construction, CC 2025 - Las Vegas, United States
Duration: Mar 1 2025 – Mar 2 2025

Publication series

Name: CC 2025 - Proceedings of the 34th ACM SIGPLAN International Conference on Compiler Construction

Conference

Conference: 34th ACM SIGPLAN International Conference on Compiler Construction, CC 2025
Country/Territory: United States
City: Las Vegas
Period: 3/1/25 – 3/2/25

Keywords

  • automatic code optimization
  • compilers
  • deep learning
  • performance model
  • pre-training
  • Tiramisu

ASJC Scopus subject areas

  • Hardware and Architecture
  • Signal Processing
  • Software
