Abstract
Recent efforts to enhance computer-aided design (CAD) flows have seen the proliferation of machine learning (ML)-based techniques. However, despite achieving state-of-the-art performance in many domains, techniques such as deep learning (DL) are susceptible to various adversarial attacks. In this work, we explore the threat posed by training data poisoning attacks, where a malicious insider attempts to insert backdoors into a deep neural network (DNN) used as part of the CAD flow. Using a case study on lithographic hotspot detection, we explore how an adversary can contaminate training data with specially crafted, yet meaningful, genuinely labeled, and design rule compliant poisoned clips. Our experiments show that a very low poisoned-to-clean ratio in the training data is sufficient to backdoor the DNN; an adversary can then "hide" specific hotspot clips at inference time by including a backdoor trigger shape in the input, with 100% success. This attack provides a novel way for adversaries to sabotage and disrupt the distributed design process. After finding that training data poisoning attacks are feasible and stealthy, we explore a potential ensemble defense against possible data contamination, showing a promising reduction in attack success. Our results raise fundamental questions about the robustness of DL-based systems in CAD, and we provide insights into the implications of these findings.
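To make the attack described above concrete, the sketch below illustrates clean-label backdoor poisoning in the spirit of the abstract: a small trigger pattern is stamped into a handful of genuinely labeled non-hotspot clips so that, at inference time, the same trigger can hide a true hotspot. This is only an illustrative sketch, not the paper's implementation; the clip representation, the trigger shape, the helper names (`stamp_trigger`, `poison_training_set`), and the poison ratio are all assumptions, and the paper's actual poisoned clips are additionally design rule compliant.

```python
import numpy as np

# Illustrative sketch only (not the paper's method): layout clips are assumed
# to be binary 2-D arrays, and the "trigger" is a small filled square stamped
# in a fixed corner. The real attack uses meaningful, DRC-compliant shapes.

def stamp_trigger(clip, size=4):
    """Return a copy of the clip with a small trigger pattern in one corner."""
    poisoned = clip.copy()
    poisoned[:size, :size] = 1  # assumed trigger location and shape
    return poisoned

def poison_training_set(clips, labels, ratio=0.01, rng=None):
    """Stamp the trigger into a small fraction of non-hotspot (label 0) clips.

    The poisoned clips keep their genuine non-hotspot labels (clean-label
    poisoning), so the model learns to associate the trigger with the
    non-hotspot class.
    """
    rng = rng or np.random.default_rng(0)
    clean_idx = np.flatnonzero(labels == 0)
    n_poison = min(len(clean_idx), max(1, int(ratio * len(clips))))
    chosen = rng.choice(clean_idx, size=n_poison, replace=False)
    clips = clips.copy()
    for i in chosen:
        clips[i] = stamp_trigger(clips[i])
    return clips, labels

# At inference time, an attacker stamps the same trigger into a true hotspot
# clip, hoping the backdoored model now classifies it as non-hotspot.
```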
| Original language | English (US) |
|---|---|
| Article number | 9200729 |
| Pages (from-to) | 1244-1257 |
| Number of pages | 14 |
| Journal | IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems |
| Volume | 40 |
| Issue number | 6 |
| DOIs | |
| State | Published - Jun 2021 |
Keywords
- Computer aided design
- design for manufacture
- machine learning (ML)
- robustness
- security
ASJC Scopus subject areas
- Software
- Computer Graphics and Computer-Aided Design
- Electrical and Electronic Engineering