
Boosting cross‐task adversarial attack with random blur


Deep neural networks are highly vulnerable to adversarial examples, and these adversarial examples remain malicious when transferred to other neural networks. Many works exploit this transferability of adversarial examples to execute black-box attacks. However, most existing adversarial attack methods rarely consider cross-task black-box attacks, which are closer to real-world scenarios. In this paper, we propose a class of random blur-based iterative methods (RBMs) to enhance the success rate of cross-task black-box attacks. By integrating random erasing and Gaussian blur into iterative gradient-based attacks, the proposed RBM increases the diversity of the adversarial perturbations and alleviates the marginal effect caused by iterative gradient-based methods, generating adversarial examples with stronger transferability. Experimental results on the ImageNet and PASCAL VOC data sets show that the proposed RBM generates more transferable adversarial examples on image classification models, thereby successfully attacking cross-task black-box object detection models.
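The core idea described above (applying random erasing and Gaussian blur to the input at each step of an iterative gradient-based attack) can be sketched as follows. This is a minimal illustration, not the paper's implementation: it uses a toy linear scorer `score(x) = <w, x>` so the gradient can be written analytically, and all names (`random_erase_mask`, `gaussian_blur`, `rbm_attack`) and hyperparameter values are assumptions chosen for the sketch.

```python
import numpy as np

def random_erase_mask(shape, rng, max_frac=0.3):
    """Binary mask that zeroes out a random rectangle (random erasing)."""
    h, w = shape
    eh = 1 + rng.integers(0, max(1, int(h * max_frac)))
    ew = 1 + rng.integers(0, max(1, int(w * max_frac)))
    top = rng.integers(0, h - eh + 1)
    left = rng.integers(0, w - ew + 1)
    mask = np.ones(shape)
    mask[top:top + eh, left:left + ew] = 0.0
    return mask

def gaussian_blur(x, sigma=1.0):
    """Separable Gaussian blur with a symmetric kernel ('same' padding)."""
    r = max(1, int(3 * sigma))
    t = np.arange(-r, r + 1)
    k = np.exp(-t ** 2 / (2 * sigma ** 2))
    k /= k.sum()
    out = np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 0, x)
    return np.apply_along_axis(lambda v: np.convolve(v, k, mode="same"), 1, out)

def rbm_attack(x, w, y, eps=0.1, alpha=0.02, steps=10, seed=0):
    """Iterative sign-gradient attack with a fresh random erase + blur
    transform T applied at every step (a sketch of the RBM idea).

    Toy surrogate: score(x) = <w, x>, label y in {-1, +1},
    loss = -y * <w, T(x)> with T(x) = blur(mask * x). Since T is linear
    and the blur kernel is symmetric, d loss / d x = mask * blur(-y * w).
    """
    rng = np.random.default_rng(seed)
    x_adv = x.copy()
    for _ in range(steps):
        mask = random_erase_mask(x.shape, rng)
        grad = mask * gaussian_blur(-y * w)       # gradient through T
        x_adv = x_adv + alpha * np.sign(grad)     # ascend the loss
        x_adv = np.clip(x_adv, x - eps, x + eps)  # stay in the L-inf ball
    return x_adv
```

With a real network, `grad` would instead come from backpropagating the classification loss through the randomly transformed input; the randomized transform is what diversifies the perturbation across iterations and is claimed to improve transferability.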

Keywords: random; cross task; adversarial examples; blur

Journal Title: International Journal of Intelligent Systems
Year Published: 2022

