Numerous machine learning (ML) and, more recently, deep learning (DL) based approaches have been proposed to tackle scalability issues in electronic design automation, including those in integrated circuit (IC) test. This paper examines state-of-the-art DL for IC test and highlights two critical unaddressed challenges. The first challenge is identifying fit-for-purpose statistical metrics to train ML models and to evaluate their performance and usefulness in IC test. Our work shows that current metrics do not reflect how well ML models have learned to generalize and perform in the domain-specific context. From this insight, we propose and evaluate alternative metrics that better capture a model's likely usefulness for the IC test problem. The second challenge is choosing an appropriate input abstraction that enables an ML model to learn robust and reliable features. We investigate how well DL-for-IC-test techniques generalize by exploring their robustness to perturbations that alter a netlist's structure but not its functionality. This paper provides insights into both challenges via empirical evaluation of the state of the art and offers guidance for future work.
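The functionality-preserving perturbations mentioned above can be illustrated with a minimal sketch. The example below (not from the paper; gate names, the dictionary-based netlist encoding, and the double-inverter transformation are illustrative assumptions) applies a structural change — inserting two back-to-back inverters on an internal net — and confirms by exhaustive simulation that the circuit's function is unchanged, which is the property such robustness tests rely on:

```python
from itertools import product

def eval_netlist(gates, inputs, assignment):
    """Evaluate a topologically ordered gate-level netlist.

    gates: list of (output_net, gate_type, input_nets) tuples.
    Returns a dict mapping every net to its Boolean value.
    """
    vals = dict(zip(inputs, assignment))
    for out, op, ins in gates:
        a = [vals[i] for i in ins]
        if op == "AND":
            vals[out] = all(a)
        elif op == "OR":
            vals[out] = any(a)
        elif op == "NOT":
            vals[out] = not a[0]
    return vals

def equivalent(g1, g2, inputs, out):
    """Check functional equivalence by enumerating all input patterns."""
    return all(
        eval_netlist(g1, inputs, v)[out] == eval_netlist(g2, inputs, v)[out]
        for v in product([False, True], repeat=len(inputs))
    )

inputs = ["a", "b"]
# Original NAND-like circuit: y = NOT(a AND b)
original = [("n1", "AND", ["a", "b"]),
            ("y",  "NOT", ["n1"])]
# Structural perturbation: double-inverter insertion on net n1.
# The graph changes (two extra nodes), the function does not.
perturbed = [("n1", "AND", ["a", "b"]),
             ("p1", "NOT", ["n1"]),
             ("p2", "NOT", ["p1"]),
             ("y",  "NOT", ["p2"])]

print(equivalent(original, perturbed, inputs, "y"))  # True
```

An ML model whose input abstraction is sensitive to such structural-only changes may predict differently on the two netlists even though, for test purposes, they are the same circuit.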
|Original language|English (US)|
|Journal|IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems|
|State|Accepted/In press - 2021|
Keywords
- Adversarial Perturbations
- Deep Learning
- Integrated circuit modeling
- Integrated circuits
- Machine Learning
- Solid modeling
- VLSI Testing
ASJC Scopus subject areas
- Computer Graphics and Computer-Aided Design
- Electrical and Electronic Engineering