TY - GEN
T1 - Poster
T2 - 16th IEEE International Conference on Software Testing, Verification and Validation, ICST 2023
AU - Naseer, Mahum
AU - Shafique, Muhammad
N1 - Publisher Copyright:
© 2023 IEEE.
PY - 2023
Y1 - 2023
N2 - Owing to their remarkable learning (and relearning) capabilities, deep neural networks (DNNs) find use in numerous real-world applications. However, the learning of these data-driven machine learning models is generally only as good as the data available to them for training. Hence, training datasets with a long-tail distribution pose a challenge for DNNs, since DNNs trained on them may provide varying degrees of classification performance across different output classes. While the overall bias of such networks has already been highlighted in existing works, this work identifies the node bias that leads to varying sensitivity of the nodes to different output classes. To the best of our knowledge, this is the first work to highlight this unique challenge in DNNs, discuss its probable causes, and provide open challenges for this new research direction. We support our reasoning with an empirical case study of networks trained on a real-world dataset.
AB - Owing to their remarkable learning (and relearning) capabilities, deep neural networks (DNNs) find use in numerous real-world applications. However, the learning of these data-driven machine learning models is generally only as good as the data available to them for training. Hence, training datasets with a long-tail distribution pose a challenge for DNNs, since DNNs trained on them may provide varying degrees of classification performance across different output classes. While the overall bias of such networks has already been highlighted in existing works, this work identifies the node bias that leads to varying sensitivity of the nodes to different output classes. To the best of our knowledge, this is the first work to highlight this unique challenge in DNNs, discuss its probable causes, and provide open challenges for this new research direction. We support our reasoning with an empirical case study of networks trained on a real-world dataset.
KW - Bias
KW - Class-wise Performance
KW - Deep Neural Networks (DNNs)
KW - Input Sensitivity
KW - Robustness
UR - http://www.scopus.com/inward/record.url?scp=85161888535&partnerID=8YFLogxK
UR - http://www.scopus.com/inward/citedby.url?scp=85161888535&partnerID=8YFLogxK
U2 - 10.1109/ICST57152.2023.00054
DO - 10.1109/ICST57152.2023.00054
M3 - Conference contribution
AN - SCOPUS:85161888535
T3 - Proceedings - 2023 IEEE 16th International Conference on Software Testing, Verification and Validation, ICST 2023
SP - 474
EP - 477
BT - Proceedings - 2023 IEEE 16th International Conference on Software Testing, Verification and Validation, ICST 2023
PB - Institute of Electrical and Electronics Engineers Inc.
Y2 - 16 April 2023 through 20 April 2023
ER -