Adversarial ML for DNNs, CapsNets, and SNNs at the Edge

Alberto Marchisio, Muhammad Abdullah Hanif, Muhammad Shafique

Research output: Chapter in Book/Report/Conference proceeding › Chapter


Recent studies have shown that Machine Learning (ML) algorithms suffer from several vulnerability threats. Among them, adversarial attacks represent one of the most critical issues. This chapter provides an overview of ML vulnerability challenges, focusing on the security threats to Deep Neural Networks, Capsule Networks, and Spiking Neural Networks. It also discusses current trends and outlooks on methodologies for enhancing the robustness of ML models.
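To make the notion of an adversarial attack concrete, the sketch below shows the widely known Fast Gradient Sign Method (FGSM) applied to a toy linear classifier. This is an illustrative example only, not code from the chapter; the model, weights, and the `epsilon` budget are assumptions chosen for demonstration.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax."""
    e = np.exp(z - z.max())
    return e / e.sum()

def fgsm_attack(W, x, y_true, epsilon):
    """FGSM sketch: perturb x by epsilon * sign(grad_x loss).

    For a linear model with logits W @ x and cross-entropy loss,
    the input gradient is W.T @ (softmax(W @ x) - onehot(y_true)).
    """
    p = softmax(W @ x)
    onehot = np.zeros_like(p)
    onehot[y_true] = 1.0
    grad_x = W.T @ (p - onehot)
    return x + epsilon * np.sign(grad_x)

# Toy setup: 2-class linear classifier on a 3-dimensional input
# (illustrative values, not from the chapter).
rng = np.random.default_rng(0)
W = rng.normal(size=(2, 3))
x = rng.normal(size=3)
y = int(np.argmax(softmax(W @ x)))     # the model's current prediction
x_adv = fgsm_attack(W, x, y, epsilon=0.5)
# The perturbation is bounded in the infinity norm by epsilon:
print(np.max(np.abs(x_adv - x)))
```

The key property illustrated is that the perturbation is imperceptibly small (bounded by `epsilon` in the infinity norm) yet aligned with the direction that most increases the loss, which is what makes such attacks effective against deep models in practice.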

Original language: English (US)
Title of host publication: Embedded Machine Learning for Cyber-Physical, IoT, and Edge Computing
Subtitle of host publication: Use Cases and Emerging Challenges
Publisher: Springer Nature
Number of pages: 34
ISBN (Electronic): 9783031406775
ISBN (Print): 9783031406768
State: Published - Jan 1 2023


Keywords
  • Adversarial attacks
  • Capsule Networks
  • Deep Neural Networks
  • Machine learning security
  • Robustness
  • Spiking Neural Networks

ASJC Scopus subject areas

  • General Computer Science
  • General Engineering
  • General Social Sciences


