TY - JOUR
T1 - Robust Deep Learning for IC Test Problems
AU - Chowdhury, Animesh Basak
AU - Tan, Benjamin
AU - Garg, Siddharth
AU - Karri, Ramesh
N1 - Funding Information:
This work was supported in part by NSF under Grant 1526405 and Grant 1801495; in part by the Office of Naval Research under Grant N00014-18-1-2058; in part by the NSF CAREER Award; and in part by the New York University/New York University Abu Dhabi CCS. The work of Benjamin Tan was supported in part by the Office of Naval Research under Grant N00014-18-1-2058.
Publisher Copyright:
© 2022 IEEE.
PY - 2022/1/1
Y1 - 2022/1/1
AB - Numerous machine learning (ML) and, more recently, deep learning (DL)-based approaches have been proposed to tackle scalability issues in electronic design automation, including those in integrated circuit (IC) test. This article examines the state of the art in DL for IC test and highlights two critical unaddressed challenges. The first challenge is identifying fit-for-purpose statistical metrics for training ML models and evaluating their performance and usefulness in IC test. Our work shows that current metrics do not reflect how well ML models have learned to generalize and perform in the domain-specific context. From this insight, we propose and evaluate alternative metrics that better capture a model's likely usefulness for the IC test problem. The second challenge is choosing an input abstraction that enables an ML model to learn robust and reliable features. We investigate how well DL techniques for IC test generalize by exploring their robustness to perturbations that alter a netlist's structure but not its functionality. This article provides insights into these challenges via empirical evaluation of the state of the art and offers guidance for future work.
KW - Adversarial perturbations
KW - Deep learning (DL)
KW - Machine learning (ML)
KW - VLSI testing
UR - http://www.scopus.com/inward/record.url?scp=85100467883&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85100467883&partnerID=8YFLogxK
U2 - 10.1109/TCAD.2021.3054808
DO - 10.1109/TCAD.2021.3054808
M3 - Article
AN - SCOPUS:85100467883
SN - 0278-0070
VL - 41
SP - 183
EP - 195
JO - IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
JF - IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
IS - 1
ER -