TY - GEN
T1 - Anomaly Unveiled
T2 - 31st IEEE International Conference on Image Processing, ICIP 2024
AU - Chattopadhyay, Nandish
AU - Guesmi, Amira
AU - Shafique, Muhammad
N1 - Publisher Copyright:
© 2024 IEEE.
PY - 2024
Y1 - 2024
N2 - Adversarial patch attacks pose a significant threat to the practical deployment of deep learning systems. Existing research primarily focuses on image pre-processing defenses, which often reduce classification accuracy on clean images and fail to effectively counter physically realizable attacks. In this paper, we investigate the behavior of adversarial patches as anomalies within the distribution of image information and leverage this insight to develop a robust defense strategy. Our defense mechanism uses the clustering-based technique DBSCAN to isolate anomalous image segments via a three-stage pipeline of Segmenting, Isolating, and Blocking phases that identifies and mitigates adversarial noise. Once adversarial components are identified, we neutralize them by replacing them with the mean pixel value, which outperforms alternative replacement strategies. Our model-agnostic defense is evaluated across multiple models and datasets, demonstrating its effectiveness against various adversarial patch attacks in image classification tasks. The proposed approach significantly improves accuracy, from 38.8% without the defense to 67.1% with it against LaVAN and GoogleAp attacks, surpassing prominent state-of-the-art methods such as LGS [1] (53.86%) and Jujutsu [2] (60%).
AB - Adversarial patch attacks pose a significant threat to the practical deployment of deep learning systems. Existing research primarily focuses on image pre-processing defenses, which often reduce classification accuracy on clean images and fail to effectively counter physically realizable attacks. In this paper, we investigate the behavior of adversarial patches as anomalies within the distribution of image information and leverage this insight to develop a robust defense strategy. Our defense mechanism uses the clustering-based technique DBSCAN to isolate anomalous image segments via a three-stage pipeline of Segmenting, Isolating, and Blocking phases that identifies and mitigates adversarial noise. Once adversarial components are identified, we neutralize them by replacing them with the mean pixel value, which outperforms alternative replacement strategies. Our model-agnostic defense is evaluated across multiple models and datasets, demonstrating its effectiveness against various adversarial patch attacks in image classification tasks. The proposed approach significantly improves accuracy, from 38.8% without the defense to 67.1% with it against LaVAN and GoogleAp attacks, surpassing prominent state-of-the-art methods such as LGS [1] (53.86%) and Jujutsu [2] (60%).
KW - adversarial defense
KW - Adversarial patch
KW - anomaly detection
KW - clustering
KW - defense pipeline
UR - http://www.scopus.com/inward/record.url?scp=85216883029&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85216883029&partnerID=8YFLogxK
U2 - 10.1109/ICIP51287.2024.10648223
DO - 10.1109/ICIP51287.2024.10648223
M3 - Conference contribution
AN - SCOPUS:85216883029
T3 - Proceedings - International Conference on Image Processing, ICIP
SP - 929
EP - 935
BT - 2024 IEEE International Conference on Image Processing, ICIP 2024 - Proceedings
PB - IEEE Computer Society
Y2 - 27 October 2024 through 30 October 2024
ER -