
Hadamard’s Defense Against Adversarial Examples


Adversarial images have become an increasing concern in real-world image recognition applications based on deep neural networks (DNNs). We observed that DNN architectures universally use one-hot encoding of class labels after a softmax layer. An attacker can make minute modifications to an input that are imperceptible to a human observer yet yield an adversarial example. The Hamming distance between any pair of one-hot codes is two, independent of the number of classes, and we conjecture this is at the heart of a successful attack. From this observation, this paper proposes increasing the Hamming distance between class codes as a generic defense against adversarial attacks. It is only natural to use error-correcting codes for this task. We present an experimental comparison of our proposal against state-of-the-art adversarial defenses, covering six types of white-box attacks and one black-box scenario described in the literature. In all the experiments, our proposal's precision surpasses the state-of-the-art defenses by a wide margin.
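The Hamming-distance observation above can be sketched in a few lines of Python. This is an illustrative toy, not the paper's implementation: `hadamard_codes`, `hamming`, and `min_pairwise_hamming` are hypothetical helper names, and the Sylvester construction is one standard way to build Hadamard codewords. It shows that one-hot codes are always at distance 2, while Hadamard-derived codes of length n sit at distance n/2.

```python
def hadamard_codes(k):
    # Sylvester construction: H_{2n} = [[H, H], [H, -H]], starting from H_1 = [1].
    # Grow until we have at least k rows (one codeword per class).
    H = [[1]]
    while len(H) < k:
        H = [row + row for row in H] + [row + [-x for x in row] for row in H]
    # Map {+1, -1} entries to {1, 0} bits to get binary class codewords.
    return [[(1 + x) // 2 for x in row] for row in H[:k]]

def hamming(a, b):
    # Number of positions where the two codewords differ.
    return sum(x != y for x, y in zip(a, b))

def min_pairwise_hamming(codes):
    # Smallest distance over all distinct pairs of class codewords.
    return min(hamming(a, b)
               for i, a in enumerate(codes) for b in codes[i + 1:])

# One-hot codes for 10 classes: minimum pairwise distance is always 2.
one_hot = [[1 if i == j else 0 for j in range(10)] for i in range(10)]

# Hadamard codes for 10 classes: 16-bit codewords at pairwise distance 8,
# so an attacker must flip far more output bits to cross a class boundary.
hada = hadamard_codes(10)
```

Here `min_pairwise_hamming(one_hot)` evaluates to 2 regardless of the number of classes, while the Hadamard codewords reach distance 8 at length 16, which is the larger separation the defense exploits.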

Keywords: Hadamard defense; adversarial defense; adversarial examples

Journal Title: IEEE Access
Year Published: 2021
