TY - GEN
T1 - Security for machine learning-based systems
T2 - 16th International Conference on Frontiers of Information Technology, FIT 2018
AU - Khalid, Faiq
AU - Hanif, Muhammad Abdullah
AU - Rehman, Semeen
AU - Shafique, Muhammad
N1 - Publisher Copyright:
© 2018 IEEE.
PY - 2018/7/2
Y1 - 2018/7/2
N2 - The exponential increase in dependencies between the cyber and physical worlds leads to an enormous amount of data that must be efficiently processed and stored. Therefore, computing paradigms are evolving towards machine learning (ML)-based systems because of their ability to process such data efficiently and accurately. Although ML-based solutions address the efficient computing requirements of big data, they introduce security vulnerabilities into the systems, which cannot be addressed by traditional monitoring-based security measures. Therefore, this paper first presents a brief overview of various security threats in machine learning, their respective threat models, and the associated research challenges in developing robust security measures. To illustrate the security vulnerabilities of ML during training, inference, and hardware implementation, we demonstrate some key security threats on ML using LeNet and VGGNet for the MNIST and German Traffic Sign Recognition Benchmark (GTSRB) datasets. Moreover, based on the security analysis of ML training, we also propose an attack that has very little impact on inference accuracy. Towards the end, we highlight the research challenges associated with developing security measures and provide a brief overview of the techniques used to mitigate such security threats.
AB - The exponential increase in dependencies between the cyber and physical worlds leads to an enormous amount of data that must be efficiently processed and stored. Therefore, computing paradigms are evolving towards machine learning (ML)-based systems because of their ability to process such data efficiently and accurately. Although ML-based solutions address the efficient computing requirements of big data, they introduce security vulnerabilities into the systems, which cannot be addressed by traditional monitoring-based security measures. Therefore, this paper first presents a brief overview of various security threats in machine learning, their respective threat models, and the associated research challenges in developing robust security measures. To illustrate the security vulnerabilities of ML during training, inference, and hardware implementation, we demonstrate some key security threats on ML using LeNet and VGGNet for the MNIST and German Traffic Sign Recognition Benchmark (GTSRB) datasets. Moreover, based on the security analysis of ML training, we also propose an attack that has very little impact on inference accuracy. Towards the end, we highlight the research challenges associated with developing security measures and provide a brief overview of the techniques used to mitigate such security threats.
KW - Attack Surface
KW - Attacks
KW - Autonomous Vehicle
KW - Deep Learning
KW - DNNs
KW - Machine Learning
KW - Neural Networks
KW - Security
KW - Traffic Sign Detection
UR - http://www.scopus.com/inward/record.url?scp=85062421529&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85062421529&partnerID=8YFLogxK
U2 - 10.1109/FIT.2018.00064
DO - 10.1109/FIT.2018.00064
M3 - Conference contribution
AN - SCOPUS:85062421529
T3 - Proceedings - 2018 International Conference on Frontiers of Information Technology, FIT 2018
SP - 327
EP - 332
BT - Proceedings - 2018 International Conference on Frontiers of Information Technology, FIT 2018
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 17 December 2018 through 19 December 2018
ER -