
Multifeature Collaborative Adversarial Attack in Multimodal Remote Sensing Image Classification



Deep neural networks have strong feature learning ability, but their vulnerability cannot be ignored. Current research shows that deep learning models are threatened by adversarial examples in remote sensing (RS) classification tasks, and their robustness drops sharply in the face of adversarial attacks. Therefore, many adversarial attack methods have been studied to predict the risks faced by a network. However, existing adversarial attack methods mainly focus on single-modal image classification networks, while the rapid growth of RS data has made multimodal RS image classification a research hotspot. Generating multimodal adversarial examples requires a high attack success rate, subtle perturbations, and collaborative attack ability across modalities. In this article, we investigate the vulnerability of multimodal RS classification networks and propose a multifeature collaborative adversarial network (MFCANet) for generating multimodal adversarial examples. Two modality-specific generators are designed to generate multimodal collaborative perturbations with strong attack ability, and two modality-specific discriminators make the generated multimodal adversarial examples closer to the real instances. In addition, a modality-specific generative loss and a modality-specific discriminative loss are proposed, and an alternating optimization strategy is designed for training the proposed MFCANet. Extensive experiments are carried out on the International Society for Photogrammetry and Remote Sensing (ISPRS) Vaihingen 2D dataset and the ISPRS Potsdam 2D dataset. The results show that the attack performance of the proposed method is stronger than that of the fast gradient sign method (FGSM), projected gradient descent (PGD), and the Carlini and Wagner (C&W) attack.
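MFCANet itself is generator-based and is not reproduced here, but the fast gradient sign method (FGSM), the simplest of the baseline attacks the abstract compares against, can be sketched in a few lines. The snippet below is a minimal, hedged illustration on a toy logistic-regression "classifier" (the model, weights, and example point are all invented for demonstration): it computes the closed-form input gradient of the cross-entropy loss and perturbs each input coordinate by `eps` in the gradient's sign direction.

```python
import math

def sigmoid(z):
    """Logistic function."""
    return 1.0 / (1.0 + math.exp(-z))

def fgsm_perturb(x, w, b, y, eps):
    """FGSM step against a toy logistic classifier p = sigmoid(w.x + b).

    For binary cross-entropy loss with label y, the input gradient has
    the closed form d(loss)/dx_i = (p - y) * w_i, so no autodiff is needed.
    """
    z = sum(wi * xi for wi, xi in zip(w, x)) + b
    p = sigmoid(z)
    grad = [(p - y) * wi for wi in w]          # input gradient of the loss
    sign = lambda g: (g > 0) - (g < 0)          # elementwise sign
    return [xi + eps * sign(gi) for xi, gi in zip(x, grad)]

# Toy point correctly classified as class 1 (logit = 1.5 > 0).
w, b = [2.0, -1.0], 0.0
x = [1.0, 0.5]
x_adv = fgsm_perturb(x, w, b, y=1.0, eps=1.0)

logit_adv = sum(wi * xi for wi, xi in zip(w, x_adv)) + b
print(x_adv, logit_adv)  # the perturbed point now has a negative logit
```

Even this one-step, single-modality attack flips the toy prediction; the paper's contribution is coordinating such perturbations across two modalities with learned generators rather than a fixed gradient step.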

Keywords: remote sensing; classification; adversarial attack; image classification

Journal Title: IEEE Transactions on Geoscience and Remote Sensing
Year Published: 2022



