Safety and robustness properties are essential for neural networks deployed in safety-critical applications, yet existing complete verification techniques for these properties suffer from limited efficiency and effectiveness. In this article, we present an efficient complete approach to verifying safety and robustness properties of neural networks by incrementally determinizing the activation states of neurons. The key idea is to generate constraints via layer-wise splitting that efficiently make the activation states of hidden neurons deterministic. These constraints are then used to refine the input systematically, so that abstract analysis over the refined input becomes more precise. Our approach decomposes a verification problem into a set of subproblems via layer-wise input space splitting; the property is then checked on each subproblem, on which the activation state of at least one hidden neuron is determinized. Subsequent checking is accelerated by constraint-guided input refinement. We have implemented a parallel tool called LayerSAR that verifies safety and robustness properties of ReLU neural networks in a sound and complete way, and evaluated it extensively on several benchmark sets. Experimental results show that our approach is promising compared with complete tools such as Planet, Neurify, Marabou, ERAN, Venus, Venus2, and nnenum on these benchmarks.
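The two ingredients the abstract names, splitting to determinize a ReLU's activation state and constraint-guided input refinement, can be illustrated on a first-layer neuron, whose pre-activation `w.x + b` is linear in the input. The sketch below is illustrative only (it is not the authors' LayerSAR implementation, and all function names are ours): interval arithmetic bounds the pre-activation over an input box; if the sign is undetermined, the problem is split into an "active" case (`w.x + b >= 0`) and an "inactive" case (`w.x + b <= 0`), and each linear constraint is used to tighten the box for that subproblem.

```python
# Hedged sketch of layer-wise splitting on one first-layer ReLU neuron.
# Not the paper's implementation; names and the one-pass refinement are
# illustrative assumptions. Boxes are lists of (lo, hi) interval pairs.

def preactivation_bounds(w, b, box):
    """Interval bounds of w.x + b over an axis-aligned input box."""
    lo = hi = b
    for wi, (xl, xu) in zip(w, box):
        lo += wi * (xl if wi >= 0 else xu)
        hi += wi * (xu if wi >= 0 else xl)
    return lo, hi

def refine_box(w, b, box):
    """One tightening pass of the box under the constraint w.x + b >= 0."""
    new = list(box)
    for i, wi in enumerate(w):
        if wi == 0:
            continue
        # Largest possible contribution of the other coordinates.
        rest = sum(wj * (xu if wj >= 0 else xl)
                   for j, (wj, (xl, xu)) in enumerate(zip(w, new)) if j != i)
        bound = (-b - rest) / wi
        if wi > 0:    # wi * x_i >= -b - rest  =>  x_i >= bound
            new[i] = (max(new[i][0], bound), new[i][1])
        else:         # dividing by wi < 0 flips the inequality
            new[i] = (new[i][0], min(new[i][1], bound))
    return new

def split_neuron(w, b, box):
    """Return subproblems on which the neuron's activation state is fixed."""
    lo, hi = preactivation_bounds(w, b, box)
    if lo >= 0 or hi <= 0:            # already stably active or inactive
        return [box]
    active   = refine_box(w, b, box)                    # w.x + b >= 0
    inactive = refine_box([-wi for wi in w], -b, box)   # w.x + b <= 0
    return [active, inactive]
```

For example, with `w = [1, 0]`, `b = -0.5`, and the box `[(-1, 1), (-1, 1)]`, the pre-activation ranges over `[-1.5, 0.5]`, so the neuron is unstable; splitting yields one subproblem with the first coordinate refined to `(0.5, 1)` (active) and one with it refined to `(-1, 0.5)` (inactive), each with a deterministic activation state. For neurons in deeper layers the pre-activation is no longer linear in the input, which is where the paper's layer-wise constraint generation does the real work.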