Deep Neural Networks (DNNs) have achieved state-of-the-art performance over many traditional Machine Learning (ML) models in diverse fields. However, adversarial examples challenge the further deployment and application of DNNs. Prior analyses of why DNNs are vulnerable to adversarial perturbations have focused on model architecture; the impact of the optimization algorithms (i.e., optimizers) used to train DNN models on the models' sensitivity to adversarial examples has not been investigated. This paper studies this impact from an experimental perspective. We analyze model sensitivity under both white-box and black-box attack setups as well as across different types of datasets. Four common optimizers, SGD, RMSprop, Adadelta, and Adam, are investigated on structured and unstructured datasets. Extensive experimental results indicate that the optimization algorithm does affect a DNN model's sensitivity to adversarial examples: when training models and generating adversarial examples, the Adam optimizer yields higher-quality adversarial examples on structured datasets, while the Adadelta optimizer yields higher-quality adversarial examples on unstructured datasets. In addition, the choice of optimizer does not affect the transferability of adversarial examples.
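To illustrate the kind of comparison the abstract describes, the sketch below trains the same small DNN with each of the four optimizers and then measures the accuracy drop under a white-box attack. This is a minimal illustration, not the paper's setup: the attack (FGSM), the synthetic tabular data, the network, and all hyperparameters are assumptions introduced only for the example.

```python
# Minimal sketch: compare how models trained with different optimizers
# respond to FGSM adversarial examples (assumed attack; not from the paper).
import torch
import torch.nn as nn

torch.manual_seed(0)

# Hypothetical stand-in for a "structured" (tabular) 2-class dataset.
X = torch.randn(2000, 20)
y = (X[:, :5].sum(dim=1) > 0).long()

def make_model():
    return nn.Sequential(nn.Linear(20, 64), nn.ReLU(),
                         nn.Linear(64, 64), nn.ReLU(),
                         nn.Linear(64, 2))

def fgsm(model, x, y, eps=0.1):
    # Fast Gradient Sign Method: perturb inputs along the sign of the loss gradient.
    x = x.clone().requires_grad_(True)
    nn.functional.cross_entropy(model(x), y).backward()
    return (x + eps * x.grad.sign()).detach()

optimizers = {
    "SGD":      lambda p: torch.optim.SGD(p, lr=0.05),
    "RMSprop":  lambda p: torch.optim.RMSprop(p, lr=1e-3),
    "Adadelta": lambda p: torch.optim.Adadelta(p, lr=1.0),
    "Adam":     lambda p: torch.optim.Adam(p, lr=1e-3),
}

for name, make_opt in optimizers.items():
    model = make_model()
    opt = make_opt(model.parameters())
    for _ in range(200):                      # short training loop
        opt.zero_grad()
        nn.functional.cross_entropy(model(X), y).backward()
        opt.step()

    clean_acc = (model(X).argmax(1) == y).float().mean().item()
    x_adv = fgsm(model, X, y)
    adv_acc = (model(x_adv).argmax(1) == y).float().mean().item()
    print(f"{name:8s} clean={clean_acc:.3f} adversarial={adv_acc:.3f}")
```

A larger gap between clean and adversarial accuracy for a given optimizer would indicate higher sensitivity of that trained model to adversarial perturbations, which is the quantity the paper's experiments compare across optimizers and dataset types.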
               