ConVERTS: Contrastively Learning Structurally InVariant Netlist Representations

Animesh B. Chowdhury, Jitendra Bhandari, Luca Collini, Ramesh Karri, Benjamin Tan, Siddharth Garg

Research output: Chapter in Book/Report/Conference proceeding › Conference contribution

Abstract

Graph neural network (GNN)-based representations of hardware designs are used in electronic design automation (EDA) tasks like logic synthesis, verification, and hardware security. While promising, state-of-the-art methods are supervised: they require target labels and/or need different behavioral register transfer level (RTL) codes of the same function as training data to generalize. We propose ConVERTS, a self-supervised contrastive learning method for netlists that generalizes well from a single RTL implementation of a design. We demonstrate the effectiveness of ConVERTS on two use cases: (1) netlist classification, and (2) recovering the functionality of obfuscated designs.

Original language: English (US)
Title of host publication: 2023 ACM/IEEE 5th Workshop on Machine Learning for CAD, MLCAD 2023
Publisher: Institute of Electrical and Electronics Engineers Inc.
ISBN (Electronic): 9798350309553
State: Published - 2023
Event: 5th ACM/IEEE Workshop on Machine Learning for CAD, MLCAD 2023 - Snowbird, United States
Duration: Sep 10 2023 - Sep 13 2023

Publication series

Name: 2023 ACM/IEEE 5th Workshop on Machine Learning for CAD, MLCAD 2023

Conference

Conference: 5th ACM/IEEE Workshop on Machine Learning for CAD, MLCAD 2023
Country/Territory: United States
City: Snowbird
Period: 9/10/23 - 9/13/23

ASJC Scopus subject areas

  • Artificial Intelligence
  • Computer Graphics and Computer-Aided Design
  • Control and Optimization
  • Modeling and Simulation
