Survey on Backdoor Attacks on Deep Learning: Current Trends, Categorization, Applications, Research Challenges, and Future Prospects

Muhammad Abdullah Hanif, Nandish Chattopadhyay, Bassem Ouni, Muhammad Shafique

Research output: Contribution to journal › Review article › peer-review

Abstract

Deep Neural Networks (DNNs) have emerged as a prominent set of algorithms for complex real-world applications. However, state-of-the-art DNNs require a significant amount of data and computational resources to train and generalize well for real-world scenarios. This dependence of DNN training on a large amount of computational and memory resources has increased the use of Machine Learning as a Service (MLaaS) or third-party resources for training large models for complex applications. Specifically, the drift of the deep learning community towards self-supervised learning for learning better representations directly from large amounts of unlabeled data has amplified the computational and memory requirements for machine learning. On the one hand, the availability of MLaaS (or third-party resources) alleviates this issue. On the other hand, it opens up avenues for a new set of vulnerabilities, where an adversary (someone from a third party) can infect the model with malicious functionality that is triggered only with specific input patterns. Such attacks are usually referred to as Trojan or backdoor attacks and are very stealthy and hard to detect. In this paper, we highlight the complete attack surface that can be exploited to inject hidden malicious functionality (backdoors) in machine learning models. We classify the attacks into two major categories, i.e., poisoning attacks and non-poisoning attacks, and present state-of-the-art works related to each. Towards the end of the article, we highlight the limitations of existing techniques and cover some of the key challenges in developing stealthy and robust real-world backdoor attacks.
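To make the poisoning-attack category concrete, the sketch below shows a minimal BadNets-style data-poisoning routine: a small fraction of training images is stamped with a fixed trigger patch and relabeled to an attacker-chosen target class, so the trained model behaves normally on clean inputs but misclassifies any input carrying the trigger. This is an illustrative toy, not code from the survey; the function name, parameters, and trigger placement are assumptions.

```python
import numpy as np

def poison_dataset(images, labels, target_label, poison_rate=0.1,
                   trigger_size=3, seed=0):
    """Illustrative BadNets-style poisoning: stamp a white square in the
    bottom-right corner of a random fraction of images and relabel them
    to the attacker's target class."""
    rng = np.random.default_rng(seed)
    images = images.copy()
    labels = labels.copy()
    n = len(images)
    idx = rng.choice(n, size=int(poison_rate * n), replace=False)
    # Fixed trigger pattern: a small white patch in the corner.
    images[idx, -trigger_size:, -trigger_size:] = 1.0
    # Dirty-label poisoning: force the attacker's target class.
    labels[idx] = target_label
    return images, labels, idx

# Toy usage: 100 blank grayscale 28x28 "images" with random labels.
X = np.zeros((100, 28, 28), dtype=np.float32)
y = np.random.randint(0, 10, size=100)
Xp, yp, idx = poison_dataset(X, y, target_label=7)
```

A model trained on `(Xp, yp)` would learn to associate the corner patch with class 7, which is what makes such backdoors stealthy: clean-input accuracy is essentially unaffected, so standard validation does not reveal the malicious behavior.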

Original language: English (US)
Pages (from-to): 93190-93221
Number of pages: 32
Journal: IEEE Access
Volume: 13
State: Published - 2025

Keywords

  • adversarial attacks
  • backdoor attacks
  • backdoor defenses
  • clean-label attacks
  • deep learning
  • DNNs
  • dynamic
  • image classification
  • machine learning (ML)
  • neural networks
  • object detection
  • security

ASJC Scopus subject areas

  • General Computer Science
  • General Materials Science
  • General Engineering

