TY - JOUR
T1 - SeVuc
T2 - A study on the Security Vulnerabilities of Capsule Networks against adversarial attacks
AU - Marchisio, Alberto
AU - Nanfa, Giorgio
AU - Khalid, Faiq
AU - Hanif, Muhammad Abdullah
AU - Martina, Maurizio
AU - Shafique, Muhammad
N1 - Funding Information:
This work has been supported in part by the Doctoral College Resilient Embedded Systems, which is run jointly by the TU Wien's Faculty of Informatics and the UAS Technikum Wien. This work was also jointly supported by the NYUAD Center for Interacting Urban Networks (CITIES), funded by Tamkeen under the NYUAD Research Institute Award CG001 and by the Swiss Re Institute under the Quantum Cities™ initiative, and Center for CyberSecurity (CCS), funded by Tamkeen under the NYUAD Research Institute Award G1104. The authors acknowledge TU Wien Bibliothek for financial support through its Open Access Funding Programme.
Publisher Copyright:
© 2022 The Author(s)
PY - 2023/2
Y1 - 2023/2
N2 - Capsule Networks (CapsNets) preserve the hierarchical spatial relationships between objects, and thereby have the potential to surpass the performance of traditional Convolutional Neural Networks (CNNs) in tasks like image classification. This makes CapsNets suitable for smart cyber–physical systems (CPS), where a large amount of training data may not be available. A large body of work has explored adversarial examples for CNNs, but their effectiveness on CapsNets has not yet been studied systematically. In our work, we analyze the vulnerability of CapsNets to adversarial attacks. These perturbations, added to the test inputs, are small and imperceptible to humans, but can fool the network into mispredicting. We propose a greedy algorithm to automatically generate imperceptible adversarial examples in a black-box attack scenario. We show that such attacks, when applied to the German Traffic Sign Recognition Benchmark and CIFAR10 datasets, mislead CapsNets into misclassifying, which can be catastrophic for smart CPS such as autonomous vehicles. Moreover, we apply the same attacks to a 5-layer CNN (LeNet), a 9-layer CNN (VGGNet), and a 20-layer CNN (ResNet), and compare the outcomes with those of the CapsNets to study their different behaviors under adversarial attacks.
AB - Capsule Networks (CapsNets) preserve the hierarchical spatial relationships between objects, and thereby have the potential to surpass the performance of traditional Convolutional Neural Networks (CNNs) in tasks like image classification. This makes CapsNets suitable for smart cyber–physical systems (CPS), where a large amount of training data may not be available. A large body of work has explored adversarial examples for CNNs, but their effectiveness on CapsNets has not yet been studied systematically. In our work, we analyze the vulnerability of CapsNets to adversarial attacks. These perturbations, added to the test inputs, are small and imperceptible to humans, but can fool the network into mispredicting. We propose a greedy algorithm to automatically generate imperceptible adversarial examples in a black-box attack scenario. We show that such attacks, when applied to the German Traffic Sign Recognition Benchmark and CIFAR10 datasets, mislead CapsNets into misclassifying, which can be catastrophic for smart CPS such as autonomous vehicles. Moreover, we apply the same attacks to a 5-layer CNN (LeNet), a 9-layer CNN (VGGNet), and a 20-layer CNN (ResNet), and compare the outcomes with those of the CapsNets to study their different behaviors under adversarial attacks.
KW - Adversarial attacks
KW - Affine transformations
KW - Architecture
KW - Artificial intelligence
KW - Capsule Networks
KW - Convolutional neural networks
KW - Deep learning
KW - Deep neural networks
KW - Machine learning
KW - Robustness
KW - Security
KW - Vulnerability
UR - http://www.scopus.com/inward/record.url?scp=85144538875&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85144538875&partnerID=8YFLogxK
U2 - 10.1016/j.micpro.2022.104738
DO - 10.1016/j.micpro.2022.104738
M3 - Article
AN - SCOPUS:85144538875
SN - 0141-9331
VL - 96
JO - Microprocessors and Microsystems
JF - Microprocessors and Microsystems
M1 - 104738
ER -