Abstract
Logic locking is a promising design-for-trust solution that protects integrated circuits (ICs) from hardware security threats such as design intellectual property (IP) piracy and illegal overproduction of ICs. With the ubiquity of machine learning (ML), researchers have proposed various ML-based attacks against logic locking techniques in recent years. Since ML-based attacks rely on non-interpretable models, understanding the reasons behind their success or failure is challenging. In this work, we propose SecureX, the first-of-its-kind technique that employs an explainable Graph Neural Network (GNN) to lock designs. The unique benefits of explainable GNN-based analysis include identifying the best locations in the design to lock and the critical (structural/functional) features that make designs vulnerable to ML-based attacks. Moreover, SecureX integrates seamlessly with state-of-the-art unbroken scan-chain protection techniques, thereby thwarting oracle-guided attacks. We perform experiments on ITC-99 benchmarks with two types of locking techniques (X(N)OR- and MUX-based locking) to demonstrate the efficacy of SecureX in producing locked designs resilient to both ML-based and non-ML-based attacks. Our results confirm that the accuracy of state-of-the-art ML-based and non-ML-based attacks drops to ≈50% while SecureX maintains low area, power, and delay overheads. Finally, we present a practical case study of locking an image-processing application.
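To make the locking flow concrete, below is a minimal Python sketch of the general idea the abstract describes: use per-gate importance scores (here a plain dictionary standing in for what an explainable GNN might output) to pick the most attack-revealing locations, then insert X(N)OR key gates there. The `Netlist` class, `insert_key_gate`, and `lock_top_k` names are illustrative assumptions, not SecureX's actual API.

```python
# Hypothetical sketch of explainability-guided X(N)OR locking.
# All names here are illustrative; they do not mirror the paper's code.
import random
from dataclasses import dataclass, field

@dataclass
class Netlist:
    # gate name -> (gate type, list of input net names)
    gates: dict = field(default_factory=dict)
    key_bits: dict = field(default_factory=dict)  # key input -> secret bit

    def insert_key_gate(self, target: str, key_name: str) -> None:
        """Rewire `target`'s output through an XOR/XNOR key gate.

        XOR(x, 0) = x and XNOR(x, 1) = x, so the gate type is matched to
        the secret key bit: applying the correct key restores the
        original function, while a wrong key inverts the signal.
        """
        bit = random.randint(0, 1)
        gate_type = "XNOR" if bit else "XOR"
        locked = f"{target}_locked"
        # Re-point every consumer of `target` to the key gate's output.
        for name, (gtype, inputs) in self.gates.items():
            self.gates[name] = (gtype, [locked if i == target else i for i in inputs])
        self.gates[locked] = (gate_type, [target, key_name])
        self.key_bits[key_name] = bit

def lock_top_k(netlist: Netlist, scores: dict, k: int) -> None:
    """Lock the k gates an explainer flags as most attack-revealing."""
    targets = sorted(scores, key=scores.get, reverse=True)[:k]
    for i, gate in enumerate(targets):
        netlist.insert_key_gate(gate, f"key_{i}")

# Toy usage: `scores` stands in for explainable-GNN node importances.
nl = Netlist(gates={"g1": ("AND", ["a", "b"]), "g2": ("OR", ["g1", "c"])})
lock_top_k(nl, scores={"g1": 0.9, "g2": 0.4}, k=1)
print(nl.gates, nl.key_bits)
```

The design choice sketched here, ranking candidate gates by explainer-assigned importance rather than picking them randomly, reflects the abstract's claim that explainability reveals which locations and features make a design vulnerable to ML-based attacks.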
| Original language | English (US) |
| --- | --- |
| Pages (from-to) | 3399-3410 |
| Journal | IEEE Transactions on Circuits and Systems I: Regular Papers |
| Volume | 72 |
| Issue number | 7 |
| DOIs | |
| State | Published - 2025 |
Keywords
- Explainability
- explainable GNN
- GNN
- hardware security
- logic locking
ASJC Scopus subject areas
- Hardware and Architecture
- Electrical and Electronic Engineering