Verifying the Correctness of Deep Neural Networks

18/10/2018 - 12:00
Speaker: Guy Katz
Seminar: 
Location: 
Abstract: 

Deep neural networks have emerged as an effective means of tackling complex, real-world problems. However, a major obstacle to applying them in safety-critical systems is the difficulty of providing formal guarantees about their behavior. We present an efficient technique for verifying properties of deep neural networks (or providing counterexamples when a property does not hold). The technique can also be used to measure a network's robustness to "adversarial inputs": slight perturbations of a network's input that cause it to err. Our approach is based on the simplex method, extended to handle the piecewise-linear activation functions that are a crucial ingredient in many modern neural networks. The verification procedure tackles neural networks as a whole, without making any simplifying assumptions. We evaluated our technique on a deep neural network implementation of the next-generation Airborne Collision Avoidance System for unmanned aircraft (ACAS Xu), proving various properties about it and, in one case, identifying incorrect behavior.
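
The core technical idea, splitting each piecewise-linear (ReLU) activation into linear cases that a simplex/LP solver can handle, can be illustrated with a small sketch. The Python snippet below is not the speaker's implementation: it verifies an output-bound property of a hypothetical two-neuron ReLU network by brute-force enumeration of activation phases, solving one linear program per phase with scipy.optimize.linprog. The network weights, the input range [0, 1], and the bound of 1.6 are illustrative assumptions, not taken from the talk.

import itertools

import numpy as np
from scipy.optimize import linprog

# Hypothetical toy network (illustrative, not from the talk):
#   y = W2 @ relu(W1 @ x + b1) + b2, with a scalar input x in [0, 1].
W1 = np.array([[1.0], [-1.0]])   # two hidden ReLU units
b1 = np.array([0.0, 0.5])
W2 = np.array([[1.0, 1.0]])
b2 = np.array([0.0])
THRESHOLD = 1.6                  # property to verify: y <= 1.6 for all x in [0, 1]

def find_counterexample():
    """Return an input x with y > THRESHOLD, or None if the property holds."""
    n = W1.shape[0]
    # LP variables: [x, h_0, ..., h_{n-1}, y]. Fixing each ReLU to
    # active/inactive turns the network into a linear program.
    for phases in itertools.product((0, 1), repeat=n):
        A_eq, b_eq, A_ub, b_ub = [], [], [], []
        # Output layer: y - sum_i W2[0, i] * h_i = b2[0]
        A_eq.append([0.0] + [-W2[0, i] for i in range(n)] + [1.0])
        b_eq.append(b2[0])
        for i in range(n):
            eq = [0.0] * (n + 2)
            ub = [0.0] * (n + 2)
            if phases[i]:
                # Active phase: h_i = W1[i]*x + b1[i], with W1[i]*x + b1[i] >= 0.
                eq[0], eq[1 + i] = -W1[i, 0], 1.0
                A_eq.append(eq); b_eq.append(b1[i])
                ub[0] = -W1[i, 0]
                A_ub.append(ub); b_ub.append(b1[i])
            else:
                # Inactive phase: h_i = 0, with W1[i]*x + b1[i] <= 0.
                eq[1 + i] = 1.0
                A_eq.append(eq); b_eq.append(0.0)
                ub[0] = W1[i, 0]
                A_ub.append(ub); b_ub.append(-b1[i])
        bounds = [(0.0, 1.0)] + [(0.0, None)] * n + [(None, None)]
        # Maximize y (linprog minimizes, so minimize -y) over this phase's polytope.
        res = linprog(c=[0.0] * (n + 1) + [-1.0], A_ub=A_ub, b_ub=b_ub,
                      A_eq=A_eq, b_eq=b_eq, bounds=bounds)
        if res.success and -res.fun > THRESHOLD + 1e-9:
            return res.x[0]  # violating input found under this phase
    return None  # every phase explored: the property holds

print(find_counterexample())  # prints None: y never exceeds 1.6 on [0, 1]

Lowering THRESHOLD to 0.9 makes the sketch return x = 1.0 as a counterexample. Enumerating all 2^n phases is exponential in the number of ReLUs; the approach described in the abstract instead handles the ReLU constraints within the simplex procedure itself, splitting into cases only when necessary.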

[Based on joint work with Clark Barrett, David Dill, Kyle Julian and Mykel Kochenderfer]


Bio:

Guy Katz is a senior lecturer at the Hebrew University of Jerusalem, Israel. He received his Ph.D. from the Weizmann Institute of Science in 2015. His research interests lie at the intersection of Formal Methods and Software Engineering, in particular the application of formal methods to software systems with components generated via machine learning.