TY - GEN
T1 - Poisoning the (Data) Well in ML-Based CAD: A Case Study of Hiding Lithographic Hotspots
T2 - 2020 Design, Automation and Test in Europe Conference and Exhibition, DATE 2020
AU - Liu, Kang
AU - Tan, Benjamin
AU - Karri, Ramesh
AU - Garg, Siddharth
N1 - Publisher Copyright:
© 2020 EDAA.
PY - 2020/3
Y1 - 2020/3
AB - Machine learning (ML) provides state-of-the-art performance in many parts of computer-aided design (CAD) flows. However, deep neural networks (DNNs) are susceptible to various adversarial attacks, including data poisoning, which compromises training in order to insert backdoors. This sensitivity to training data integrity presents a security vulnerability, especially in light of malicious insiders who want to cause targeted neural network misbehavior. In this study, we explore this threat in lithographic hotspot detection via training data poisoning, where hotspots in a layout clip can be hidden at inference time by including a trigger shape in the input. We show that training data poisoning attacks are feasible and stealthy, demonstrating a backdoored neural network that performs normally on clean inputs but misbehaves on inputs that contain a backdoor trigger. Furthermore, our results raise fundamental questions about the robustness of ML-based systems in CAD.
UR - http://www.scopus.com/inward/record.url?scp=85087416366&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85087416366&partnerID=8YFLogxK
U2 - 10.23919/DATE48585.2020.9116489
DO - 10.23919/DATE48585.2020.9116489
M3 - Conference contribution
AN - SCOPUS:85087416366
T3 - Proceedings of the 2020 Design, Automation and Test in Europe Conference and Exhibition, DATE 2020
SP - 306
EP - 309
BT - Proceedings of the 2020 Design, Automation and Test in Europe Conference and Exhibition, DATE 2020
A2 - Di Natale, Giorgio
A2 - Bolchini, Cristiana
A2 - Vatajelu, Elena-Ioana
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 9 March 2020 through 13 March 2020
ER -