Abstract
Numerous machine learning (ML) and, more recently, deep learning (DL)-based approaches have been proposed to tackle scalability issues in electronic design automation, including those in integrated circuit (IC) test. This article examines the state of the art in DL for IC test and highlights two critical unaddressed challenges. The first challenge is identifying fit-for-purpose statistical metrics with which to train ML models and evaluate their performance and usefulness in IC test. Our work shows that the metrics in current use do not reflect how well ML models have learned to generalize and perform in this domain-specific context. From this insight, we propose and evaluate alternative metrics that better capture a model's likely usefulness for the IC test problem. The second challenge is choosing an input abstraction that enables an ML model to learn robust and reliable features. We investigate how well DL techniques for IC test generalize by exploring their robustness to perturbations that alter a netlist's structure but not its functionality. This article provides insights into these challenges via an empirical evaluation of the state of the art and offers guidance for future work.
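The perturbations studied here change a netlist's gate-level structure while preserving its Boolean function. As a toy illustration of such a functionality-preserving rewrite (this is a minimal sketch, not the article's actual tooling; the function names and the exhaustive equivalence check are hypothetical), the Python snippet below applies De Morgan's law to a two-input NAND and verifies that structure changed while function did not:

```python
from itertools import product

def original(a, b):
    # NAND expressed as NOT(AND(a, b)).
    return not (a and b)

def perturbed(a, b):
    # De Morgan rewrite: NOT(a AND b) == (NOT a) OR (NOT b).
    # Structurally different (two inverters feeding an OR gate),
    # yet functionally identical -- the kind of perturbation a
    # robust DL-for-test model should be invariant to.
    return (not a) or (not b)

# Exhaustive simulation over the 2-input truth table.
assert all(original(a, b) == perturbed(a, b)
           for a, b in product([False, True], repeat=2))
print("Structure changed, function preserved.")
```

Exhaustive simulation is only viable at this toy scale; for real netlists, equivalence of the perturbed design would be established with a formal (e.g., SAT-based) combinational equivalence checker.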
Original language | English (US) |
---|---|
Pages (from-to) | 183-195 |
Number of pages | 13 |
Journal | IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems |
Volume | 41 |
Issue number | 1 |
State | Published - Jan 1 2022 |
Keywords
- Adversarial perturbations
- Deep learning (DL)
- Machine learning (ML)
- VLSI testing
ASJC Scopus subject areas
- Software
- Computer Graphics and Computer-Aided Design
- Electrical and Electronic Engineering