Adversarial images have become an increasing concern in real-world image recognition applications built on deep neural networks (DNNs). We observe that DNN classification architectures universally pair a softmax output layer with a one-hot encoding of the class labels. An attacker can make minute modifications to an input, producing an adversarial example whose changes are imperceptible to a human observer. The Hamming distance between any pair of one-hot codes is two, independent of the number of classes, and we argue that this is at the heart of a successful attack. Based on this observation, this paper proposes increasing the Hamming distance between class codes as a generic defense against adversarial attacks; error-correcting codes are the natural tool for this task. We present an experimental comparison of our proposal against state-of-the-art (SOTA) adversarial defenses, covering six types of white-box attacks and one black-box scenario described in the literature. In all experiments, our proposal's precision surpasses the state-of-the-art defenses by a wide margin.
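To make the core observation concrete, below is a minimal sketch of the general error-correcting-output-codes idea the abstract describes. It is not the paper's actual construction: the Hadamard-based codebook, the 16-bit code length, and the helper names (hamming, codebook, decode) are illustrative assumptions. The sketch verifies that one-hot codes sit at pairwise Hamming distance 2 regardless of class count, while a simple error-correcting codebook achieves a much larger minimum distance, and shows nearest-codeword decoding in place of argmax over softmax outputs.

```python
import numpy as np
from scipy.linalg import hadamard

def hamming(a, b):
    """Number of positions in which two binary codewords differ."""
    return int(np.sum(a != b))

num_classes = 10

# One-hot encoding: any two distinct class codes differ in exactly
# two positions, no matter how many classes there are.
one_hot = np.eye(num_classes, dtype=int)
assert hamming(one_hot[0], one_hot[1]) == 2

# An illustrative error-correcting codebook: rows of a Hadamard
# matrix (a standard ECOC choice; the paper's codebook may differ).
H = (hadamard(16) > 0).astype(int)   # 16-bit binary codewords
codebook = H[1:num_classes + 1]      # skip the all-ones row

min_dist = min(
    hamming(codebook[i], codebook[j])
    for i in range(num_classes)
    for j in range(i + 1, num_classes)
)
print("one-hot min distance:", 2)
print("ECOC min distance:   ", min_dist)  # 8 for these Hadamard rows

def decode(bit_probs, codebook):
    """Classify by the codeword nearest in Hamming distance,
    given per-bit output probabilities from the network."""
    bits = (np.asarray(bit_probs) > 0.5).astype(int)
    dists = [hamming(bits, c) for c in codebook]
    return int(np.argmin(dists))
```

Under this scheme an adversary must flip several output bits, rather than swap the two positions that separate a pair of one-hot codes, before the nearest-codeword decoder changes its decision.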