Reluplex Summary

Summary: Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks

Author Info

Guy Katz’s Home Page
Google Scholar

Your Info

Zikang Xiong’s Home Page

Overview

Motivation

Deep Neural Networks (DNNs) are an effective way to tackle complex real-world problems. However, before DNNs can be applied to safety-critical systems, formal guarantees about their behavior are needed.

An Abstraction-Refinement Approach to Verification of Artificial Neural Networks

This work was published at CAV 2010. The challenge of verifying neural networks comes from the non-linear activation functions, which make the verification problem non-convex. The activation function considered in this work is the sigmoid. The authors proposed a method that replaces the activation functions with piecewise linear approximations. However, this increased the computational cost significantly; as a result, the approach is not scalable and only supports very small networks with at most 20 neurons.

Measuring Neural Net Robustness with Constraints

This work comes from NIPS 2016. To verify robustness properties of a neural network, they focus only on a small neighborhood of an input, within which the states of the ReLUs are fixed.

X. Huang et al. Safety Verification of Deep Neural Networks, 2016. Technical Report.

This work also focuses on robustness properties. It reduces an infinite neighborhood to a finite set of points and checks that the labels of these points are consistent. This process is then propagated through the network layer by layer.

The limitation of this work is its strong assumption that the finite sets correctly represent their infinite domains.

A Formally Verified Hybrid System for the Next-Generation Airborne Collision Avoidance System.

This work proposed a hybrid technique that combines the analysis of lookup tables and DNNs.

AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation

This work was published at the 2018 IEEE Symposium on Security and Privacy. It is based on over-approximation, and the authors proved that AI2 is sound.
The activation function considered is also ReLU; however, this work supports convolutional neural networks as well as max-pooling layers.

Moreover, this work is more scalable than Reluplex: it can verify neural networks with many more neurons.

Contributions

  • This is pioneering work on neural network verification.
  • They proposed an effective way to encode the non-linear ReLU activation function.
  • They extended the Simplex method, which solves linear real arithmetic, to handle problems with the non-linear ReLU activation function.
  • They proved that Reluplex is sound and complete.
  • They proposed several suggestions for an efficient implementation.

Details

Difficulty for Verifying Neural Network

The difficulty of verifying neural networks comes from the non-linear activation functions, which make the verification problem non-convex. Without the activation functions, it would be easy to encode the verification problem in linear arithmetic and solve it with SMT and LP solvers.

One naive way to solve this problem is to encode each ReLU as a disjunction of its two linear cases. However, since a solver may have to explore both cases of every ReLU, this can cause exponential growth in the number of sub-problems (up to 2^n case combinations for n ReLUs).
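For illustration, here is a minimal sketch of this naive disjunctive encoding using the Z3 SMT solver's Python API. The toy network y = ReLU(2x - 1), the input bounds, and the property y <= 0.5 are all hypothetical, not taken from the paper.

```python
# A toy disjunctive ReLU encoding (requires: pip install z3-solver).
from z3 import Real, And, Or, Solver, sat

x, pre, y = Real("x"), Real("pre"), Real("y")

s = Solver()
s.add(pre == 2 * x - 1)              # affine layer: pre = 2x - 1
s.add(Or(And(pre >= 0, y == pre),    # ReLU active case
         And(pre < 0, y == 0)))      # ReLU inactive case
s.add(x >= 0, x <= 1)                # input domain

# Negate the toy property "y <= 0.5" and search for a counterexample.
s.add(y > 0.5)
if s.check() == sat:
    print("counterexample:", s.model())   # e.g. x = 0.8
else:
    print("property holds")
```

Each ReLU contributes one such disjunction, which is why a plain SMT encoding scales poorly as the number of ReLUs grows.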

Reluplex Procedure

To address this problem, they proposed a new procedure, Reluplex, built on top of the simplex algorithm used in LP solvers.

In the simplex procedure, variables are strictly subject to their bounds. In Reluplex, however, variables can temporarily violate their bounds, and members of ReLU pairs can even temporarily violate the ReLU semantics. This is because Reluplex repairs violations lazily while guessing the states of the ReLU functions, instead of splitting on every ReLU up front.

They added the first four rules below to simplex and replaced the Success rule with a ReluSuccess rule.

(Figure: derivation rules of the Reluplex calculus.)

  • The PivotForRelu rule allows a basic variable appearing in a ReLU to be pivoted so that either Update_b or Update_f can be applied.
  • The Update_b and Update_f rules handle the temporary violations of the ReLU semantics (a small sketch follows this list).
  • The ReluSplit rule splits on certain ReLU connections, guessing whether they are active or inactive.
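To make the repair rules concrete, here is a rough sketch (my own illustration, not the paper's calculus) of how a violated ReLU pair (b, f), where f should equal max(0, b), can be locally repaired by updating one member in the spirit of Update_b and Update_f:

```python
def relu_repair(b: float, f: float) -> tuple[float, float]:
    """Repair the pair so that f == max(0, b) by changing one member.

    Illustrative only: real Reluplex also pivots basic variables and
    respects variable bounds when choosing which member to update.
    """
    if f == max(0.0, b):
        return b, f          # pair already satisfies the ReLU semantics
    if f > 0.0:
        return f, f          # Update_b-style: set b := f (active phase)
    return b, max(0.0, b)    # Update_f-style: set f := max(0, b)

print(relu_repair(-2.0, 3.0))  # (3.0, 3.0): b is pulled up to match f
print(relu_repair(2.0, 0.0))   # (2.0, 2.0): f is raised to max(0, b)
```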

They proved that Reluplex is sound and complete.

Efficiently Implementing Reluplex

Tighter Bound Derivation

We can derive tighter lower and upper bounds for variables with the following rules.
(Figure: bound-tightening derivation rules.)
Sometimes these tighter bounds eliminate variables or fix ReLU phases in advance, which accelerates the procedure.
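As a simple illustration (my own, with hypothetical bounds) of why this pays off: once the derived pre-activation interval lies entirely on one side of zero, the ReLU's phase is fixed and no ReluSplit is needed for that connection.

```python
def relu_bounds(l: float, u: float) -> tuple[float, float, str]:
    """Propagate pre-activation bounds [l, u] through y = max(0, x)."""
    if l >= 0.0:
        return l, u, "active: y == x, no case split needed"
    if u <= 0.0:
        return 0.0, 0.0, "inactive: y == 0, no case split needed"
    return 0.0, u, "undetermined: a case split may still be required"

print(relu_bounds(0.5, 2.0))    # phase fixed to active
print(relu_bounds(-3.0, -1.0))  # phase fixed to inactive
print(relu_bounds(-1.0, 2.0))   # still undetermined
```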

Derived Bounds and Conflict Analysis

This is analogous to conflict analysis in SAT and SMT solvers: bounds derived during one case split can be reused to prune other branches of the search.

Floating-Point Arithmetic and Under-Approximation

Sacrificing some precision can sometimes improve speed without affecting the result: for example, using floating-point arithmetic instead of exact rationals, combined with conservative under-approximation to guard against round-off errors.
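As a sketch of the general idea (the tolerance value below is a hypothetical choice, not the paper's), floating-point comparisons can be guarded so that round-off noise is not mistaken for a bound violation:

```python
EPS = 1e-6  # hypothetical round-off tolerance

def leq(a: float, b: float) -> bool:
    """Treat a <= b as satisfied when the gap is within round-off noise."""
    return a <= b + EPS

def violates_upper_bound(value: float, upper: float) -> bool:
    # Report a violation only when it clearly exceeds the tolerance.
    return not leq(value, upper)

print(violates_upper_bound(1.0 + 1e-9, 1.0))  # False: within tolerance
print(violates_upper_bound(1.1, 1.0))         # True: a real violation
```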

Experiments

ACAS Xu is an airborne collision avoidance system. Originally, the system was implemented with lookup tables that consumed more than 2 GB of memory; the neural network controller requires only about 3 MB. They verified the neural network controllers of the ACAS Xu system with Reluplex.

They verified not only general safety properties but also robustness properties.

They also compared Reluplex with state-of-the-art SMT and LP solvers. On the benchmarks that require case splits, Reluplex beat all the other solvers; only on some benchmarks that do not require splits did one LP solver perform better than Reluplex.

Limitations and Comments

Limitation on Scalability

Although they tried many practical ways to improve performance, verifying a neural network still takes a very long time, especially as the network grows.
In AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation, the experiments show that Reluplex's performance degrades significantly once the MNIST fully connected network grows beyond 6 layers of 100 neurons each (6x100).
(Figure: scalability comparison of Reluplex and AI2, from the AI2 paper.)

Only Supports Specific Neural Networks with Specific Layers

Reluplex only supports fully connected neural networks with the ReLU activation function. Other widely used architectures, such as CNNs and max-pooling layers, are not supported.

References

[1] G. Katz, C. Barrett, D. Dill, K. Julian, and M. Kochenderfer. Reluplex: An Efficient SMT Solver for Verifying Deep Neural Networks. In Proc. Int. Conf. on Computer Aided Verification (CAV), 2017.

[2] L. Pulina and A. Tacchella. An Abstraction-Refinement Approach to Verification of Artificial Neural Networks. In Proc. Int. Conf. on Computer Aided Verification (CAV), 2010.

[3] O. Bastani, Y. Ioannou, L. Lampropoulos, D. Vytiniotis, A. Nori, and A. Criminisi. Measuring Neural Net Robustness with Constraints. In Proc. 30th Conf. on Neural Information Processing Systems (NIPS), 2016.

[4] X. Huang, M. Kwiatkowska, S. Wang, and M. Wu. Safety Verification of Deep Neural Networks, 2016. Technical Report. http://arxiv.org/abs/1610.06940.

[5] J.-B. Jeannin, K. Ghorbal, Y. Kouskoulas, R. Gardner, A. Schmidt, E. Zawadzki, and A. Platzer. A Formally Verified Hybrid System for the Next-Generation Airborne Collision Avoidance System. In Proc. 21st Int. Conf. on Tools and Algorithms for the Construction and Analysis of Systems (TACAS), 2015.

[6] T. Gehr, M. Mirman, D. Drachsler-Cohen, P. Tsankov, S. Chaudhuri, and M. Vechev. AI2: Safety and Robustness Certification of Neural Networks with Abstract Interpretation. In Proc. 2018 IEEE Symposium on Security and Privacy (SP), 2018.