Abstract
In recent years, numerous studies have demonstrated the vulnerability of deep neural networks (DNNs) to small input noise, which can cause otherwise correct classifications to fail. This has motivated the formal analysis of DNNs to ensure that they exhibit acceptable behavior. However, when a DNN's behavior is unacceptable for the target application, these qualitative approaches are ill-equipped to determine the precise degree to which the DNN misbehaves. We propose a novel quantitative DNN analysis framework, QuanDA, which not only checks whether a DNN exhibits a certain behavior but also estimates the probability with which it does so. Unlike the (few) available quantitative DNN analysis frameworks, QuanDA makes no implicit assumptions about the probability distributions of the hidden nodes, which enables the framework to propagate close-to-real probability distributions of the hidden node values to each subsequent DNN layer. Furthermore, our framework leverages CUDA to parallelize the analysis on GPUs, enabling high-speed analysis. The applicability of the framework is demonstrated on the ACAS Xu benchmark, providing reachability probability estimates for all network nodes. This paper also discusses potential applications of QuanDA to the analysis of DNN safety properties.
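To make the core idea concrete, the sketch below illustrates one way a sampling-based analysis can propagate empirical (rather than parametrically assumed) distributions of hidden node values layer by layer and report a probability estimate with a confidence interval. This is only a minimal illustration, not QuanDA itself: the network weights, the input distribution, the safety property, and all names are hypothetical, and the actual framework performs this propagation in parallel on the GPU via CUDA rather than in NumPy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical 2-layer ReLU network (illustrative weights, not from the paper).
W1 = rng.normal(size=(5, 3)); b1 = rng.normal(size=5)
W2 = rng.normal(size=(2, 5)); b2 = rng.normal(size=2)

def relu(x):
    return np.maximum(x, 0.0)

# Sample inputs from an assumed bounded input region (uniform, for illustration).
n_samples = 100_000
x = rng.uniform(-1.0, 1.0, size=(n_samples, 3))

# Propagate the whole sample cloud layer by layer: the empirical distribution
# of each hidden node is carried forward with no parametric assumption.
h1 = relu(x @ W1.T + b1)   # empirical distribution of layer-1 node values
y = h1 @ W2.T + b2         # empirical distribution of output node values

# Estimate the probability that a hypothetical safety property holds,
# e.g., "output node 0 stays below output node 1".
holds = y[:, 0] < y[:, 1]
p_hat = holds.mean()

# 95% normal-approximation confidence interval on the estimate.
half_width = 1.96 * np.sqrt(p_hat * (1.0 - p_hat) / n_samples)
print(f"P(property holds) = {p_hat:.4f} +/- {half_width:.4f}")

# Per-node reachability estimates, e.g., probability a hidden node is active.
activation_prob = (h1 > 0).mean(axis=0)
print("P(hidden node active):", np.round(activation_prob, 3))
```

Because each input sample is propagated independently, this style of analysis is embarrassingly parallel over samples, which is what makes a CUDA implementation on the GPU a natural fit.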
| Original language | English (US) |
|---|---|
| Article number | 95 |
| Journal | ACM Transactions on Design Automation of Electronic Systems |
| Volume | 28 |
| Issue number | 6 |
| DOIs | |
| State | Published - Oct 16 2023 |
Keywords
- ACAS Xu
- confidence interval
- GPU
- neural networks
- quantitative analysis
- safety properties
ASJC Scopus subject areas
- Computer Science Applications
- Computer Graphics and Computer-Aided Design
- Electrical and Electronic Engineering