TY - GEN
T1 - Investigating Spatially Correlated Patterns in Adversarial Images
AU - Chattopadhyay, Nandish
AU - Zhi, Lionell Yip En
AU - Xing, Bryan Tan
AU - Chattopadhyay, Anupam
AU - Shafique, Muhammad
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Adversarial attacks have proved to be a major impediment to progress in research on reliable machine learning solutions. Carefully crafted perturbations, imperceptible to human vision, can be added to images to force misclassification by an otherwise high-performing neural network. To better understand the key contributors to such structured attacks, we searched for and studied spatially co-located patterns in the distribution of pixels in the input space. In this paper, we propose a framework for segregating and isolating regions within an input image that are particularly critical to classification (during inference), to adversarial vulnerability, or to both. We assert that during inference the trained model attends to a specific region in the image, which we call the Region of Importance (RoI), while the attacker targets a region to alter, which we call the Region of Attack (RoA). As illustrated by our observations, this approach can also be used to design a post-hoc adversarial defence method: it blocks out (we call this neutralizing) the region of the image that is highly vulnerable to adversarial attacks but is not important for the task of classification. We establish the theoretical setup for formalising the processes of segregation, isolation and neutralization, and substantiate it through empirical analysis on standard benchmarking datasets. The findings strongly indicate that mapping features into the input space preserves the significant patterns typically observed in the feature space while adding substantial interpretability, thereby simplifying potential defensive mechanisms.
AB - Adversarial attacks have proved to be a major impediment to progress in research on reliable machine learning solutions. Carefully crafted perturbations, imperceptible to human vision, can be added to images to force misclassification by an otherwise high-performing neural network. To better understand the key contributors to such structured attacks, we searched for and studied spatially co-located patterns in the distribution of pixels in the input space. In this paper, we propose a framework for segregating and isolating regions within an input image that are particularly critical to classification (during inference), to adversarial vulnerability, or to both. We assert that during inference the trained model attends to a specific region in the image, which we call the Region of Importance (RoI), while the attacker targets a region to alter, which we call the Region of Attack (RoA). As illustrated by our observations, this approach can also be used to design a post-hoc adversarial defence method: it blocks out (we call this neutralizing) the region of the image that is highly vulnerable to adversarial attacks but is not important for the task of classification. We establish the theoretical setup for formalising the processes of segregation, isolation and neutralization, and substantiate it through empirical analysis on standard benchmarking datasets. The findings strongly indicate that mapping features into the input space preserves the significant patterns typically observed in the feature space while adding substantial interpretability, thereby simplifying potential defensive mechanisms.
KW - Adversarial attacks
KW - Deep learning
KW - Neural networks
KW - spatial correlation
UR - http://www.scopus.com/inward/record.url?scp=85214648185&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85214648185&partnerID=8YFLogxK
U2 - 10.1109/ICIPCW64161.2024.10769132
DO - 10.1109/ICIPCW64161.2024.10769132
M3 - Conference contribution
AN - SCOPUS:85214648185
T3 - 2024 IEEE International Conference on Image Processing Challenges and Workshops, ICIPCW 2024 - Proceedings
SP - 4058
EP - 4064
BT - 2024 IEEE International Conference on Image Processing Challenges and Workshops, ICIPCW 2024 - Proceedings
PB - Institute of Electrical and Electronics Engineers Inc.
T2 - 31st IEEE International Conference on Image Processing Challenges and Workshops, ICIPCW 2024
Y2 - 27 October 2024 through 30 October 2024
ER -