Deep neural networks (DNNs) have gained widespread adoption in computer vision. Unfortunately, state-of-the-art DNNs are vulnerable to adversarial example (AE) attacks, in which an adversary introduces imperceptible perturbations into a test example to deceive the DNN. This vulnerability has spurred intensive research on improving DNN robustness via adversarial training, that is, blending the clean data set with adversarial examples during training. However, adversarial attack techniques are open-ended, so adversarial training alone is insufficient to guarantee robustness. To circumvent this limitation, we mitigate adversarial example attacks from another perspective: detecting adversarial examples. We propose the feature autoencoder detector (FADetector), a novel defense framework that exploits feature knowledge. A hallmark of FADetector is that no adversarial examples are needed to train the detector. Our extensive evaluation on the MNIST and CIFAR-10 data sets demonstrates that our defense outperforms conventional autoencoder detectors in terms of detection accuracy.
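The abstract does not give implementation details, but the general idea behind autoencoder-based AE detection can be sketched as follows: train an autoencoder on clean data only, then flag test inputs whose reconstruction error exceeds a threshold calibrated on clean examples. The sketch below is a minimal illustration of this baseline idea, not the paper's FADetector; the architecture, the 95th-percentile threshold, and the names `Autoencoder`, `train_detector`, and `is_adversarial` are illustrative assumptions.

```python
import torch
import torch.nn as nn

# Minimal fully connected autoencoder for 28x28 grayscale inputs (MNIST-sized).
class Autoencoder(nn.Module):
    def __init__(self, input_dim=784, latent_dim=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Linear(input_dim, 128), nn.ReLU(),
            nn.Linear(128, latent_dim), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 128), nn.ReLU(),
            nn.Linear(128, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))


def train_detector(clean_data, epochs=10, lr=1e-3):
    """Train the autoencoder on clean examples only (no adversarial data)."""
    model = Autoencoder(input_dim=clean_data.shape[1])
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    loss_fn = nn.MSELoss()
    for _ in range(epochs):
        opt.zero_grad()
        loss = loss_fn(model(clean_data), clean_data)
        loss.backward()
        opt.step()
    # Calibrate the detection threshold from clean reconstruction errors
    # (95th percentile is an assumed choice for illustration).
    with torch.no_grad():
        errors = ((model(clean_data) - clean_data) ** 2).mean(dim=1)
        threshold = torch.quantile(errors, 0.95).item()
    return model, threshold


def is_adversarial(model, threshold, x):
    """Flag inputs whose reconstruction error exceeds the clean-data threshold."""
    with torch.no_grad():
        err = ((model(x) - x) ** 2).mean(dim=1)
    return err > threshold


if __name__ == "__main__":
    # Stand-in clean data; in practice this would be MNIST or CIFAR-10 samples
    # (or intermediate feature representations, as the paper's name suggests).
    clean = torch.rand(256, 784)
    model, thr = train_detector(clean)
    test = torch.rand(16, 784)
    print(is_adversarial(model, thr, test))
```

Note that this sketch operates on raw pixels; the paper's framework is described as exploiting feature knowledge, so the actual FADetector presumably works on learned feature representations rather than inputs.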