Abstract
Adversarial attacks developed for natural images transfer to medical images, disrupting the diagnostic process and threatening the robustness of the underlying Convolutional Neural Network (CNN) classifiers. In this work, we first demonstrate the effectiveness of well-known natural-image adversarial attacks such as FGSM and PGD on malaria cell images. We then propose a novel defense methodology, FRNet, which leverages well-established handcrafted features such as HOG, LBP, KAZE, and SIFT that detect edges and objects while remaining robust to imperceptible adversarial perturbations. The method uses an MLP to concatenate these features into FRNet, yielding a convenient, architecture-neutral, and attack-generic methodology. Our experimental results demonstrate that applying FRNet to different CNN architectures, such as a simple CNN, EfficientNet, and MobileNet, reduces the impact of adversarial attacks by as much as 67% compared with the corresponding base models.
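The two ingredients named in the abstract can be sketched concretely. The block below is a minimal illustration only, not the authors' FRNet implementation: a one-step FGSM perturbation, a HOG + LBP extractor standing in for the full HOG/LBP/KAZE/SIFT feature set, and a hypothetical `FRNetHead` MLP that fuses CNN outputs with the handcrafted vector.

```python
# Minimal sketch of the abstract's ideas (illustrative; FRNetHead and its
# fusion scheme are assumptions, not the paper's released architecture).
import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from skimage.feature import hog, local_binary_pattern

def fgsm_attack(model, x, y, eps=8 / 255):
    """One-step FGSM: move x along the sign of the loss gradient."""
    x = x.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(x), y)
    loss.backward()
    return (x + eps * x.grad.sign()).clamp(0.0, 1.0).detach()

def handcrafted_features(gray_img):
    """Concatenate a HOG descriptor with a uniform-LBP histogram.
    KAZE/SIFT descriptors would be pooled and appended analogously."""
    h = hog(gray_img, orientations=9, pixels_per_cell=(8, 8),
            cells_per_block=(2, 2))
    lbp = local_binary_pattern(gray_img, P=8, R=1, method="uniform")
    # Uniform LBP with P=8 yields codes 0..9, hence 10 histogram bins.
    lbp_hist, _ = np.histogram(lbp, bins=10, range=(0, 10), density=True)
    return np.concatenate([h, lbp_hist]).astype(np.float32)

class FRNetHead(nn.Module):
    """Hypothetical fusion head: CNN output + handcrafted features -> MLP."""
    def __init__(self, cnn_dim, feat_dim, n_classes=2):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(cnn_dim + feat_dim, 128), nn.ReLU(),
            nn.Linear(128, n_classes))

    def forward(self, cnn_out, feats):
        return self.mlp(torch.cat([cnn_out, feats], dim=1))
```

Because the handcrafted descriptors are computed independently of the CNN's gradients, small FGSM/PGD perturbations that fool the CNN branch leave the HOG/LBP vector largely unchanged, which is the intuition behind the fusion defense.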
Original language | English (US)
---|---
Pages (from-to) | 26943-26956
Number of pages | 14
Journal | IEEE Access
Volume | 12
State | Published - 2024
Keywords
- Adversarial attack
- defense mechanism
- malaria cell images
- medical images
- neural network
ASJC Scopus subject areas
- General Computer Science
- General Materials Science
- General Engineering