Abstract. Neural networks are vulnerable to various adversarial perturbations added to the input. Highly sparse adversarial perturbations are difficult to detect, which makes them especially dangerous to network security. Previous research has shown that the ℓ0-norm yields good sparsity but is challenging to optimize. We use the ℓq-norm to approximate the ℓ0-norm and propose a new white-box algorithm that generates adversarial examples by minimizing the ℓq distance to the original image. We also extend the adversarial attack to the facial anti-spoofing task in the field of face recognition security, which enables us to generate sparse and unobservable facial attack perturbations. To increase the diversity of the data set, we construct a new data set of real and fake facial images containing images produced by various recent spoofing methods. Extensive experiments show that the proposed method effectively generates sparse perturbations and successfully misleads classifiers on both multi-classification and facial anti-spoofing tasks.
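The abstract does not give the full algorithm, but the core idea (a white-box attack that trades off misclassification against an ℓq penalty with q < 1 as a surrogate for ℓ0) can be sketched as below. This is a minimal illustrative sketch, not the authors' exact method; `model`, `x`, `y`, and the hyper-parameters `q`, `c`, `steps`, and `lr` are assumptions for illustration.

```python
# Hedged sketch: gradient-based white-box attack with an lq-norm (q < 1)
# penalty on the perturbation, used as a smooth surrogate for the l0-norm.
# NOT the paper's exact algorithm; names and hyper-parameters are assumed.
import torch
import torch.nn.functional as F

def lq_sparse_attack(model, x, y, q=0.5, c=1.0, steps=200, lr=0.01, eps=1e-8):
    """Return adversarial examples x + delta with a sparse perturbation delta."""
    delta = torch.zeros_like(x, requires_grad=True)      # perturbation to optimize
    optimizer = torch.optim.Adam([delta], lr=lr)
    for _ in range(steps):
        logits = model(x + delta)
        # Negative cross-entropy pushes predictions away from the true labels.
        adv_loss = -F.cross_entropy(logits, y)
        # lq penalty with q < 1 approximates the l0 "norm" and promotes sparsity;
        # eps keeps the gradient finite at zero.
        lq_penalty = (delta.abs() + eps).pow(q).sum()
        loss = lq_penalty + c * adv_loss
        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        # Project so that x + delta stays in the valid image range (assumed [0, 1]).
        delta.data = torch.clamp(x + delta.data, 0.0, 1.0) - x
    return (x + delta).detach()
```

In this penalty formulation, the weight `c` balances attack success against sparsity; the paper's actual optimization of the ℓq objective may differ.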