Bias Busters: Robustifying DL-Based Lithographic Hotspot Detectors against Backdooring Attacks

Kang Liu, Benjamin Tan, Gaurav Rajavendra Reddy, Siddharth Garg, Yiorgos Makris, Ramesh Karri

Research output: Contribution to journal › Article › peer-review


Deep learning (DL) offers potential improvements throughout the CAD tool-flow, one promising application being lithographic hotspot detection. However, DL techniques have been shown to be especially vulnerable to both inference-time and training-time adversarial attacks. Recent work has demonstrated that a small fraction of malicious physical designers can stealthily "backdoor" a DL-based hotspot detector during its training phase such that it accurately classifies regular layout clips but predicts hotspots containing a specially crafted trigger shape as non-hotspots. We propose a novel training data augmentation strategy as a powerful defense against such backdooring attacks. The defense works by eliminating the intentional biases introduced in the training data and requires no knowledge of which training samples are poisoned or of the nature of the backdoor trigger. Our results show that the defense can drastically reduce the attack success rate from 84% to 0%.
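The abstract does not spell out the augmentation details, but the general idea of a bias-removing augmentation can be illustrated with a standard symmetry-based scheme: applying random rotations and flips to layout clips so that a trigger shape can no longer sit at a fixed, learnable position. This is a hedged sketch of that general technique, not the paper's exact method; the `augment_clip` helper and the 2-D array representation of a clip are assumptions for illustration only.

```python
import numpy as np

def augment_clip(clip, rng):
    """Apply a random 90-degree rotation and optional horizontal flip
    to a layout clip (assumed here to be a 2-D array), breaking any
    fixed spatial bias a backdoor trigger might rely on."""
    k = int(rng.integers(0, 4))        # 0, 90, 180, or 270 degrees
    out = np.rot90(clip, k)
    if rng.integers(0, 2):             # flip with probability 1/2
        out = np.fliplr(out)
    return out

# Illustrative usage on a toy 4x4 "clip"
rng = np.random.default_rng(seed=0)
clip = np.arange(16).reshape(4, 4)
aug = augment_clip(clip, rng)
```

In a training pipeline, each (possibly poisoned) clip would be augmented on the fly, so the detector sees trigger shapes in many orientations and cannot latch onto them as a reliable shortcut feature.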

Original language: English (US)
Pages (from-to): 2077-2089
Number of pages: 13
Journal: IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems
Issue number: 10
State: Published - Oct 2021


Keywords
  • Defense
  • electronic design automation (EDA)
  • machine learning (ML)
  • robustness
  • security

ASJC Scopus subject areas

  • Software
  • Computer Graphics and Computer-Aided Design
  • Electrical and Electronic Engineering

