Sinkhorn Adversarial Attack and Defense

Adversarial attacks have been extensively investigated in the recent past. Interestingly, the majority of these attacks operate primarily in the $l_{p}$ space. In this work, we propose a novel approach for generating adversarial samples using the Wasserstein distance. Unlike previous approaches, we use an unbalanced optimal transport formulation that is naturally suited to images. We first compute an adversarial sample using a gradient step and then project the resulting image onto a Wasserstein ball centered at the original sample. The attack introduces perturbations by redistributing pixel mass, guided by a cost metric. Extensive experiments on MNIST, Fashion-MNIST, CIFAR-10 and Tiny ImageNet demonstrate a sharp decrease in the performance of state-of-the-art classifiers. We also perform experiments with adversarially trained classifiers and show that our system achieves superior performance in terms of adversarial defense against several state-of-the-art attacks. Our code and pre-trained models are available at https://bit.ly/2SQBR4E.
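
The abstract describes a two-step attack: a gradient step on the classifier loss, followed by a projection onto a Wasserstein ball around the original image, computed via Sinkhorn-style iterations on an unbalanced optimal transport problem. The sketch below is only an illustration of that general idea, not the authors' released code: the function names (pixel_cost, sinkhorn_distance, wasserstein_attack_step) are hypothetical, and instead of the paper's unbalanced-OT projection it simply backtracks on the step size until a balanced Sinkhorn transport cost between the original and perturbed pixel-mass distributions stays within the budget eps.

```python
# Minimal NumPy sketch (illustrative only): signed gradient step + Sinkhorn-measured
# Wasserstein budget, enforced by backtracking instead of a true projection.
import numpy as np

def pixel_cost(h, w):
    """Squared Euclidean distance between pixel locations, normalized to [0, 1]."""
    ys, xs = np.mgrid[0:h, 0:w]
    coords = np.stack([ys.ravel(), xs.ravel()], axis=1).astype(float)
    diff = coords[:, None, :] - coords[None, :, :]
    cost = np.sum(diff ** 2, axis=-1)
    return cost / cost.max()                     # keep the entropic kernel well scaled

def sinkhorn_distance(a, b, cost, reg=0.01, n_iters=200):
    """Transport cost of the entropic Sinkhorn plan between histograms a and b."""
    K = np.exp(-cost / reg)                      # Gibbs kernel
    u, v = np.ones_like(a), np.ones_like(b)
    for _ in range(n_iters):
        u = a / (K @ v + 1e-12)                  # match row marginals
        v = b / (K.T @ u + 1e-12)                # match column marginals
    plan = u[:, None] * K * v[None, :]           # approximate transport plan
    return float(np.sum(plan * cost))

def wasserstein_attack_step(x, grad, eps, cost, step=0.1, shrink=0.5, max_tries=10):
    """Take a signed gradient step, then backtrack until the perturbed image
    stays inside an (approximate) Wasserstein ball of radius eps around x."""
    a = x.ravel() / x.sum()                      # original pixel-mass distribution
    for _ in range(max_tries):
        x_adv = np.clip(x + step * np.sign(grad), 0.0, 1.0)
        b = x_adv.ravel() / x_adv.sum()          # perturbed pixel-mass distribution
        if sinkhorn_distance(a, b, cost) <= eps:
            return x_adv
        step *= shrink                           # over budget: shrink the step
    return x                                     # fall back to the clean image

# Toy usage on an 8x8 "image" with a random surrogate gradient.
rng = np.random.default_rng(0)
x, grad = rng.random((8, 8)), rng.standard_normal((8, 8))
x_adv = wasserstein_attack_step(x, grad, eps=0.05, cost=pixel_cost(8, 8))
```

The backtracking check is a crude stand-in for the projection step described in the abstract; in particular, the paper's unbalanced formulation also allows pixel mass to be created or destroyed, which this balanced Sinkhorn routine does not model.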

Keywords: Sinkhorn; adversarial attack; adversarial defense

Journal Title: IEEE Transactions on Image Processing
Year Published: 2022
