BadNets: Evaluating Backdooring Attacks on Deep Neural Networks

Tianyu Gu, Kang Liu, Brendan Dolan-Gavitt, Siddharth Garg

Research output: Contribution to journal › Article › peer-review


Deep learning-based techniques have achieved state-of-the-art performance on a wide variety of recognition and classification tasks. However, these networks are typically computationally expensive to train, requiring weeks of computation on many GPUs; as a result, many users outsource the training procedure to the cloud or rely on pre-trained models that are then fine-tuned for a specific task. In this paper, we show that outsourced training introduces new security risks: an adversary can create a maliciously trained network (a backdoored neural network, or BadNet) that has state-of-the-art performance on the user's training and validation samples but behaves badly on specific attacker-chosen inputs. We first explore the properties of BadNets in a toy example by creating a backdoored handwritten digit classifier. Next, we demonstrate backdoors in a more realistic scenario by creating a U.S. street sign classifier that identifies stop signs as speed limits when a special sticker is added to the stop sign. We further show that the backdoor in our U.S. street sign detector can persist even if the network is later retrained for another task, causing a drop in accuracy of 25% on average when the backdoor trigger is present. These results demonstrate that backdoors in neural networks are both powerful and, because the behavior of neural networks is difficult to explicate, stealthy. This paper provides motivation for further research into techniques for verifying and inspecting neural networks, just as we have developed tools for verifying and debugging software.
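The attack the abstract describes can be illustrated by a minimal training-set poisoning sketch: stamp a small trigger pattern onto a fraction of the training images and relabel those images with an attacker-chosen target class, so a model trained on the mixed data learns both the clean task and the backdoor. The function names, the square bottom-right patch, and the 10% poisoning rate below are illustrative assumptions, not the authors' exact procedure.

```python
import numpy as np

def add_trigger(image, patch_size=3, value=1.0):
    """Stamp a small bright square (the backdoor trigger) in the
    bottom-right corner of a single-channel image (illustrative choice)."""
    poisoned = image.copy()
    poisoned[-patch_size:, -patch_size:] = value
    return poisoned

def poison_dataset(images, labels, target_label, poison_fraction=0.1, seed=0):
    """Return copies of (images, labels) in which a random fraction of the
    images carry the trigger and have their labels flipped to the
    attacker's target class, plus the indices that were poisoned."""
    rng = np.random.default_rng(seed)
    images, labels = images.copy(), labels.copy()
    n_poison = int(len(images) * poison_fraction)
    idx = rng.choice(len(images), size=n_poison, replace=False)
    for i in idx:
        images[i] = add_trigger(images[i])
        labels[i] = target_label
    return images, labels, idx

# Tiny demo: 100 fake 28x28 "digit" images with labels 0-9.
imgs = np.zeros((100, 28, 28), dtype=np.float32)
lbls = np.arange(100) % 10
p_imgs, p_lbls, idx = poison_dataset(imgs, lbls, target_label=7)
```

Training an ordinary classifier on `p_imgs`/`p_lbls` would then yield a model that behaves normally on clean inputs but predicts the target class whenever the trigger patch is present.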

Original language: English (US)
Article number: 8685687
Pages (from-to): 47230-47243
Number of pages: 14
Journal: IEEE Access
State: Published - 2019


Keywords

  • Computer security
  • Machine learning
  • Neural networks

ASJC Scopus subject areas

  • General Computer Science
  • General Materials Science
  • General Engineering


