TY - GEN
T1 - QuSecNets: Quantization-based Defense Mechanism for Securing Deep Neural Network against Adversarial Attacks
T2 - 25th IEEE International Symposium on On-Line Testing and Robust System Design, IOLTS 2019
AU - Khalid, Faiq
AU - Ali, Hassan
AU - Tariq, Hammad
AU - Hanif, Muhammad Abdullah
AU - Rehman, Semeen
AU - Ahmed, Rehan
AU - Shafique, Muhammad
N1 - Funding Information:
This work was partially supported by the Erasmus+ International Credit Mobility (KA107).
Publisher Copyright:
© 2019 IEEE.
PY - 2019/7
Y1 - 2019/7
N2 - Adversarial examples have emerged as a significant threat to machine learning algorithms, especially to convolutional neural networks (CNNs). In this paper, we propose two quantization-based defense mechanisms, Constant Quantization (CQ) and Trainable Quantization (TQ), to increase the robustness of CNNs against adversarial examples. CQ quantizes input pixel intensities based on a 'fixed' number of quantization levels, while in TQ the quantization levels are 'iteratively learned during the training phase', thereby providing a stronger defense mechanism. We apply the proposed techniques to undefended CNNs against different state-of-the-art adversarial attacks from the open-source Cleverhans library. The experimental results demonstrate 50%-96% and 10%-50% increases in the classification accuracy of perturbed images generated from the MNIST and CIFAR-10 datasets, respectively, on a commonly used CNN (Conv2D(64, 8×8) - Conv2D(128, 6×6) - Conv2D(128, 5×5) - Dense(10) - Softmax()) available in the Cleverhans library.
AB - Adversarial examples have emerged as a significant threat to machine learning algorithms, especially to convolutional neural networks (CNNs). In this paper, we propose two quantization-based defense mechanisms, Constant Quantization (CQ) and Trainable Quantization (TQ), to increase the robustness of CNNs against adversarial examples. CQ quantizes input pixel intensities based on a 'fixed' number of quantization levels, while in TQ the quantization levels are 'iteratively learned during the training phase', thereby providing a stronger defense mechanism. We apply the proposed techniques to undefended CNNs against different state-of-the-art adversarial attacks from the open-source Cleverhans library. The experimental results demonstrate 50%-96% and 10%-50% increases in the classification accuracy of perturbed images generated from the MNIST and CIFAR-10 datasets, respectively, on a commonly used CNN (Conv2D(64, 8×8) - Conv2D(128, 6×6) - Conv2D(128, 5×5) - Dense(10) - Softmax()) available in the Cleverhans library.
KW - Adversarial Attacks
KW - Adversarial Machine Learning
KW - Classification
KW - CNN
KW - Convolutional Neural Networks
KW - Defense
KW - DNN
KW - Machine Learning
KW - Quantization
KW - Security
KW - Trainable Quantization
UR - http://www.scopus.com/inward/record.url?scp=85072988187&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85072988187&partnerID=8YFLogxK
U2 - 10.1109/IOLTS.2019.8854377
DO - 10.1109/IOLTS.2019.8854377
M3 - Conference contribution
AN - SCOPUS:85072988187
T3 - 2019 IEEE 25th International Symposium on On-Line Testing and Robust System Design, IOLTS 2019
SP - 182
EP - 187
BT - 2019 IEEE 25th International Symposium on On-Line Testing and Robust System Design, IOLTS 2019
A2 - Gizopoulos, Dimitris
A2 - Alexandrescu, Dan
A2 - Papavramidou, Panagiota
A2 - Maniatakos, Michail
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 1 July 2019 through 3 July 2019
ER -