This paper proposes a locally-optimal generalized likelihood ratio test (LO-GLRT) for detecting targeted attacks on a classifier, where the attacks add a norm-bounded targeted universal adversarial perturbation (UAP) to the classifier’s input. The paper includes both an analysis of the test and its empirical evaluation. The analysis provides an expression for an approximate lower bound on the detection probability, and the empirical evaluation shows this approximation to closely match the actual detection probability. Since the LO-GLRT requires the score function of the input distribution, which is usually unknown in practice, we study the LO-GLRT for a learned surrogate input distribution. Specifically, we use a Gaussian distribution over the input subvectors as the surrogate distribution, for its mathematical tractability and computational efficiency. We evaluate the detector for several popular image classifiers and datasets, and compare its statistical and computational performance with the perturbation rectifying network (PRN) detector, another successful approach for detecting UAPs. The LO-GLRT outperforms the PRN detector on both counts, with a running time at least 100 times lower than that of the PRN detector.
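
A locally optimal test of this kind relies on the score function (the gradient of the log-density) of the clean-input distribution, and the Gaussian surrogate over input subvectors makes that score available in closed form. The sketch below is only an illustration of how such a detector might be assembled, not the paper's actual statistic: the function names, the per-block aggregation by ℓ2 norm (which corresponds to maximizing the first-order term over an ℓ2-bounded perturbation), and the thresholding rule are all assumptions made for the example.

```python
import numpy as np

def gaussian_score(x_block, mean, cov_inv):
    """Score of a Gaussian surrogate density: grad log N(x; mu, Sigma) = -Sigma^{-1} (x - mu)."""
    return -cov_inv @ (x_block - mean)

def lo_glrt_statistic(x, means, cov_invs, block_size):
    """Hypothetical LO-GLRT-style statistic (illustrative, not the paper's exact form).

    The input is split into subvectors (blocks), each block is scored under its own
    Gaussian surrogate, and the unknown norm-bounded perturbation is maximized out;
    for an l2 bound the first-order term reduces to the l2 norm of the block score.
    """
    blocks = x.reshape(-1, block_size)
    stat = 0.0
    for i, block in enumerate(blocks):
        score = gaussian_score(block, means[i], cov_invs[i])
        stat += np.linalg.norm(score)  # worst-case linear term for this block
    return stat

def detect_uap(x, means, cov_invs, block_size, threshold):
    """Flag an attack when the statistic exceeds a threshold set for a target false-alarm rate."""
    return lo_glrt_statistic(x, means, cov_invs, block_size) > threshold
```

In such a scheme, the per-block means and inverse covariances would be estimated from clean training data, and the threshold would be calibrated empirically (e.g., from the statistic's distribution on held-out clean inputs) to meet a desired false-alarm probability.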