
Explore Adversarial Attack via Black Box Variational Inference



From the perspective of probability, we propose a new method for black-box adversarial attack via black-box variational inference (BBVI), where knowledge of the victim model is unavailable. Instead of obtaining a single point, the proposed method focuses on approximating the probability distribution of adversarial examples, so that an unlimited number of adversarial examples can be drawn from the inferred distribution. Although the Monte Carlo estimator in BBVI is unbiased, its variance makes the gradient estimate unstable, leading to poor attack performance and low query efficiency. To reduce this variance, we improve BBVI with importance sampling guided by a surrogate model, obtaining a better gradient estimator and enhancing both the success rate and the query efficiency. Extensive experiments on the ImageNet dataset demonstrate that the proposed method outperforms prior art.
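The abstract outlines the core loop: parameterize a distribution over perturbations, estimate the gradient of the expected attack loss from black-box queries alone, and use a surrogate model as an importance-sampling guide to tame the variance of that estimate. The following is a minimal sketch of that idea, not the authors' implementation: the distribution is a fixed-variance Gaussian, and `query_victim_loss`, `surrogate_grad`, and all hyperparameters are hypothetical placeholders.

```python
import numpy as np

def query_victim_loss(x_adv, label):
    """Hypothetical black-box oracle: returns a scalar attack loss (e.g. a
    margin loss) of the victim model on x_adv, without exposing gradients."""
    raise NotImplementedError

def surrogate_grad(x_adv, label):
    """Hypothetical white-box surrogate gradient, used only to guide the
    sampling proposal, never to attack the victim directly."""
    raise NotImplementedError

def bbvi_attack(x, label, steps=200, samples=20, lr=0.01, sigma=0.05, eps=0.05):
    """Fit a Gaussian q(delta | mu, sigma) over perturbations by ascending the
    expected attack loss with a score-function (REINFORCE-style) estimator."""
    d = x.size
    mu = np.zeros(d)                                   # mean of q; sigma kept fixed
    for _ in range(steps):
        # Importance-sampling proposal: shift the mean toward the surrogate's
        # ascent direction so that samples land in higher-loss regions.
        shift = lr * np.sign(surrogate_grad(x + mu.reshape(x.shape), label)).ravel()
        grad_mu = np.zeros(d)
        for _ in range(samples):
            delta = mu + shift + sigma * np.random.randn(d)   # sample from proposal
            # Importance weight q(delta) / proposal(delta) for the shifted Gaussian.
            log_q = -np.sum((delta - mu) ** 2) / (2 * sigma ** 2)
            log_p = -np.sum((delta - mu - shift) ** 2) / (2 * sigma ** 2)
            w = np.exp(log_q - log_p)
            x_adv = np.clip(x + np.clip(delta, -eps, eps).reshape(x.shape), 0.0, 1.0)
            loss = query_victim_loss(x_adv, label)            # black-box query
            # Score function of the Gaussian: d log q / d mu = (delta - mu) / sigma^2.
            grad_mu += w * loss * (delta - mu) / sigma ** 2
        mu += lr * grad_mu / samples                   # ascend the expected attack loss
    # Adversarial examples can now be drawn freely from q(delta | mu, sigma).
    return mu, sigma
```

Because the victim is only ever queried for loss values, the estimator stays black-box; the surrogate merely shapes the sampling proposal, which is where the variance reduction comes from, and any number of adversarial examples can then be drawn from the fitted distribution.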

Keywords: adversarial attack; black-box attack; black-box variational inference

Journal Title: IEEE Signal Processing Letters
Year Published: 2022


