Deep Learning Robustness Verification for Few-Pixel Attacks
Despite their success, neural networks have been shown to be vulnerable to adversarial example attacks. In a few-pixel attack, the attacker picks t pixels of the image and perturbs them arbitrarily. To determine a network's robustness to these attacks, one must check its robustness to perturbations of every set of t pixels. Since the number of such sets is exponentially large, existing robustness verifiers, which reason about a single set of pixels at a time, are impractical for this kind of robustness verification. We introduce Calzone, a few-pixel-attack robustness verifier for neural networks.
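To illustrate the scale of the problem, a short sketch of how the number of t-sized pixel subsets grows (the 784-pixel image size is an assumed example, corresponding to a 28x28 grayscale image, and is not taken from the abstract):

```python
import math

# Number of t-pixel subsets an enumerative verifier would have to check
# for an assumed 784-pixel (28x28) image; grows as C(784, t).
for t in range(1, 5):
    print(t, math.comb(784, t))
```

Already at t = 3 there are over 80 million subsets, so verifying each subset individually is infeasible.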
To the best of our knowledge, Calzone is the first to provide a sound and complete analysis for few-pixel attacks. Calzone relies on dynamic programming and on combinatorial objects called covering designs to eliminate the need to iterate over all t-sized subsets, thereby reducing verification time. We show that Calzone typically verifies robustness within a few minutes. We compare Calzone to a MILP baseline and show that the baseline fails to scale even for t = 3.
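The covering-design idea can be sketched as follows. A (v, k, t)-covering design is a collection of k-sized blocks of a v-element set such that every t-sized subset is contained in some block; if robustness is proven for each block, it follows for every t-subset inside it. The greedy construction and the tiny parameters below are illustrative assumptions, not Calzone's actual construction:

```python
from itertools import combinations

def greedy_cover(v, k, t):
    """Greedily build a (v, k, t)-covering design: k-sized blocks such
    that every t-subset of range(v) lies inside at least one block."""
    uncovered = set(combinations(range(v), t))
    blocks = []
    while uncovered:
        # Pick the k-set covering the most still-uncovered t-subsets.
        best = max(combinations(range(v), k),
                   key=lambda b: sum(s in uncovered
                                     for s in combinations(b, t)))
        blocks.append(best)
        uncovered -= set(combinations(best, t))
    return blocks

# Toy example: cover all 21 pairs of 7 pixels with 3-pixel blocks.
blocks = greedy_cover(v=7, k=3, t=2)
print(len(blocks))  # far fewer verification queries than 21
```

Each block now corresponds to one (larger) verification query, trading a modest increase in per-query difficulty for a large reduction in the number of queries.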