
Strengthening Robustness Under Adversarial Attacks Using Brain Visual Codes


The vulnerability of computational models to adversarial examples highlights the differences in how humans and machines process visual information. Motivated by the invariance of human perception in object recognition, we aim to incorporate human brain representations into the training of a neural network. We propose a multi-modal approach that integrates visual input with the corresponding encoded brain signals to improve the adversarial robustness of the model. We investigate the effects of visual attacks of various strengths on an image classification task. Our experiments show that the proposed multi-modal framework remains more robust than the baseline methods as the amount of adversarial perturbation increases. Remarkably, in a black-box setting, our framework achieves a performance improvement of at least 7.54% and 5.97% on the MNIST and CIFAR-10 datasets, respectively. Finally, we conduct an ablation study to justify the necessity and significance of incorporating visual brain representations.

Keywords: adversarial attacks; adversarial robustness; brain visual codes

Journal Title: IEEE Access
Year Published: 2022
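
The abstract describes a multi-modal framework that fuses visual input with encoded brain signals and evaluates it under visual attacks of increasing strength. Below is a minimal sketch of how such a fusion classifier and a perturbation step could look; the architecture, layer sizes, the brain-code dimensionality, and the FGSM-style attack are illustrative assumptions, not the paper's actual implementation.

```python
# Hypothetical sketch: image branch + brain-signal branch fused for classification.
# All names and dimensions here are assumptions for illustration only.
import torch
import torch.nn as nn

class MultiModalClassifier(nn.Module):
    def __init__(self, brain_dim=128, num_classes=10):
        super().__init__()
        # Image branch: a small CNN for 32x32 RGB inputs (CIFAR-10-style).
        self.image_encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        # Brain branch: an MLP over a pre-encoded brain-signal vector.
        self.brain_encoder = nn.Sequential(
            nn.Linear(brain_dim, 64), nn.ReLU(),
        )
        # Fusion head: concatenate both feature vectors and classify.
        self.classifier = nn.Linear(64 + 64, num_classes)

    def forward(self, image, brain_code):
        img_feat = self.image_encoder(image)
        brain_feat = self.brain_encoder(brain_code)
        return self.classifier(torch.cat([img_feat, brain_feat], dim=1))


def fgsm_perturb(model, image, brain_code, label, epsilon):
    """Apply an FGSM-style visual perturbation of strength epsilon to the image only."""
    image = image.clone().detach().requires_grad_(True)
    loss = nn.functional.cross_entropy(model(image, brain_code), label)
    loss.backward()
    return (image + epsilon * image.grad.sign()).clamp(0, 1).detach()


# Example usage with random stand-in data.
model = MultiModalClassifier()
images = torch.rand(8, 3, 32, 32)
brain_codes = torch.rand(8, 128)
labels = torch.randint(0, 10, (8,))
adv_images = fgsm_perturb(model, images, brain_codes, labels, epsilon=0.03)
logits = model(adv_images, brain_codes)
```

Note that in a black-box setting such as the one reported in the abstract, the adversary would craft perturbations against a substitute model rather than computing gradients of the defended model directly as this sketch does.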


