TY - GEN
T1 - TrISec: Training Data-Unaware Imperceptible Security Attacks on Deep Neural Networks
T2 - 25th IEEE International Symposium on On-Line Testing and Robust System Design, IOLTS 2019
AU - Khalid, Faiq
AU - Hanif, Muhammad Abdullah
AU - Rehman, Semeen
AU - Ahmed, Rehan
AU - Shafique, Muhammad
N1 - Funding Information:
This work was partially supported by the Erasmus+ International Credit Mobility (KA107).
Publisher Copyright:
© 2019 IEEE.
PY - 2019/7
Y1 - 2019/7
N2 - Most data manipulation attacks on deep neural networks (DNNs) during the training stage introduce perceptible noise that can be mitigated by preprocessing during inference or identified during the validation phase. Therefore, data poisoning attacks during inference (e.g., adversarial attacks) are becoming more popular. However, many of them do not consider the imperceptibility factor in their optimization algorithms, and can therefore be detected by correlation and structural similarity analysis, or are noticeable (e.g., by humans) in multi-level security systems. Moreover, the majority of inference attacks rely on some knowledge about the training dataset. In this paper, we propose a novel methodology that automatically generates imperceptible attack images by using the back-propagation algorithm on pre-trained DNNs, without requiring any information about the training dataset (i.e., it is completely training data-unaware). We present a case study on traffic sign detection using VGGNet trained on the German Traffic Sign Recognition Benchmark (GTSRB) dataset in an autonomous driving use case. Our results demonstrate that the generated attack images successfully perform misclassification while remaining imperceptible in both 'subjective' and 'objective' quality tests.
AB - Most data manipulation attacks on deep neural networks (DNNs) during the training stage introduce perceptible noise that can be mitigated by preprocessing during inference or identified during the validation phase. Therefore, data poisoning attacks during inference (e.g., adversarial attacks) are becoming more popular. However, many of them do not consider the imperceptibility factor in their optimization algorithms, and can therefore be detected by correlation and structural similarity analysis, or are noticeable (e.g., by humans) in multi-level security systems. Moreover, the majority of inference attacks rely on some knowledge about the training dataset. In this paper, we propose a novel methodology that automatically generates imperceptible attack images by using the back-propagation algorithm on pre-trained DNNs, without requiring any information about the training dataset (i.e., it is completely training data-unaware). We present a case study on traffic sign detection using VGGNet trained on the German Traffic Sign Recognition Benchmark (GTSRB) dataset in an autonomous driving use case. Our results demonstrate that the generated attack images successfully perform misclassification while remaining imperceptible in both 'subjective' and 'objective' quality tests.
KW - Adversarial Machine Learning
KW - Data Poisoning Attacks
KW - Deep Neural Network
KW - DNNs
KW - Imperceptible Attack Noise
KW - Machine Learning
KW - ML Security
UR - http://www.scopus.com/inward/record.url?scp=85072961359&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85072961359&partnerID=8YFLogxK
U2 - 10.1109/IOLTS.2019.8854425
DO - 10.1109/IOLTS.2019.8854425
M3 - Conference contribution
AN - SCOPUS:85072961359
T3 - 2019 IEEE 25th International Symposium on On-Line Testing and Robust System Design, IOLTS 2019
SP - 188
EP - 193
BT - 2019 IEEE 25th International Symposium on On-Line Testing and Robust System Design, IOLTS 2019
A2 - Gizopoulos, Dimitris
A2 - Alexandrescu, Dan
A2 - Papavramidou, Panagiota
A2 - Maniatakos, Michail
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 1 July 2019 through 3 July 2019
ER -