TY - GEN
T1 - FANNet: Formal Analysis of Noise Tolerance of a Feed-forward Neural Network
T2 - 2020 Design, Automation and Test in Europe Conference and Exhibition, DATE 2020
AU - Naseer, Mahum
AU - Minhas, Mishal Fatima
AU - Khalid, Faiq
AU - Hanif, Muhammad Abdullah
AU - Hasan, Osman
AU - Shafique, Muhammad
N1 - Funding Information:
This work was partially supported by Doctoral College Resilient Embedded Systems which is run jointly by TU Wien’s Faculty of Informatics and FH-Technikum Wien, and partially supported by the Erasmus+ International Credit Mobility (KA107).
Publisher Copyright:
© 2020 EDAA.
PY - 2020/3
Y1 - 2020/3
N2 - With constant improvements in network architectures and training methodologies, Neural Networks (NNs) are increasingly being deployed in real-world Machine Learning systems. However, despite their impressive performance on known inputs, these NNs can fail unexpectedly on unseen inputs, especially if these real-time inputs deviate from the training dataset distributions or contain certain types of input noise. This indicates the low noise tolerance of NNs, which is a major reason for the recent rise in adversarial attacks. This is a serious concern, particularly for safety-critical applications, where inaccurate results can lead to dire consequences. We propose a novel methodology that leverages model checking for the Formal Analysis of Neural Networks (FANNet) under different input noise ranges. Our methodology allows us to rigorously analyze the noise tolerance of NNs, their input node sensitivity, and the effects of training bias on their performance, e.g., in terms of classification accuracy. For evaluation, we use a feed-forward fully-connected NN architecture trained for Leukemia classification. Our experimental results show ±11% noise tolerance for the given trained network, identify the most sensitive input nodes, and confirm the bias of the available training dataset.
AB - With constant improvements in network architectures and training methodologies, Neural Networks (NNs) are increasingly being deployed in real-world Machine Learning systems. However, despite their impressive performance on known inputs, these NNs can fail unexpectedly on unseen inputs, especially if these real-time inputs deviate from the training dataset distributions or contain certain types of input noise. This indicates the low noise tolerance of NNs, which is a major reason for the recent rise in adversarial attacks. This is a serious concern, particularly for safety-critical applications, where inaccurate results can lead to dire consequences. We propose a novel methodology that leverages model checking for the Formal Analysis of Neural Networks (FANNet) under different input noise ranges. Our methodology allows us to rigorously analyze the noise tolerance of NNs, their input node sensitivity, and the effects of training bias on their performance, e.g., in terms of classification accuracy. For evaluation, we use a feed-forward fully-connected NN architecture trained for Leukemia classification. Our experimental results show ±11% noise tolerance for the given trained network, identify the most sensitive input nodes, and confirm the bias of the available training dataset.
KW - Adversarial Machine Learning
KW - Formal Analysis
KW - Formal Methods
KW - Model Checking
KW - Neural Networks
UR - http://www.scopus.com/inward/record.url?scp=85087423296&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85087423296&partnerID=8YFLogxK
U2 - 10.23919/DATE48585.2020.9116247
DO - 10.23919/DATE48585.2020.9116247
M3 - Conference contribution
AN - SCOPUS:85087423296
T3 - Proceedings of the 2020 Design, Automation and Test in Europe Conference and Exhibition, DATE 2020
SP - 666
EP - 669
BT - Proceedings of the 2020 Design, Automation and Test in Europe Conference and Exhibition, DATE 2020
A2 - Di Natale, Giorgio
A2 - Bolchini, Cristiana
A2 - Vatajelu, Elena-Ioana
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 9 March 2020 through 13 March 2020
ER -