Published in 2022 in "International Journal of Intelligent Systems"
DOI: 10.1002/int.22889
Abstract: Deep neural networks (DNNs) have gained widespread adoption in computer vision. Unfortunately, state‐of‐the‐art DNNs are vulnerable to adversarial example (AE) attacks, where an adversary introduces imperceptible perturbations to a test example for defrauding DNNs. The…
Keywords: detecting adversarial; adversarial examples; adversarial example; examples feature; …
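The imperceptible perturbations this abstract refers to are most often crafted with gradient-based methods. Below is a minimal sketch of the fast gradient sign method (FGSM) against a toy logistic classifier; the model, sizes, and seed are illustrative assumptions, not the detection method this paper proposes.

```python
import numpy as np

# Toy logistic classifier: class 1 if the score w @ v is non-negative.
# All values here are synthetic stand-ins, not data from the paper.
rng = np.random.default_rng(0)
w = rng.normal(size=16)                  # weights of a toy linear model
x = rng.normal(size=16)                  # a "test example"

def predict(v):
    """Class 1 if the logistic score w @ v is non-negative, else class 0."""
    return int(w @ v >= 0.0)

score = w @ x
# For a linear model, the gradient of the score w.r.t. the input is just w,
# so the FGSM step moves each coordinate by eps against the current class.
# Choosing eps just large enough to cross the decision boundary
# guarantees the prediction flips in this toy setting.
eps = (abs(score) + 1.0) / np.abs(w).sum()
x_adv = x - np.sign(score) * eps * np.sign(w)

print(predict(x), "->", predict(x_adv))  # the prediction flips
```

In practice eps is a small fixed budget (e.g. 8/255 for 8-bit images) so the perturbation stays imperceptible; here it is chosen adaptively only to make the flip deterministic for the demo.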
Published in 2021 in "Journal of Circuits, Systems and Computers"
DOI: 10.1142/s0218126622500074
Abstract: Neural network technology has achieved good results in many tasks, such as image classification. However, for some input examples of neural networks, after the addition of designed and imperceptible perturbations to the examples, these adversarial…
Keywords: neural network; adversarial example; example; example generation; …
Published in 2023 in "Wireless Communications and Mobile Computing"
DOI: 10.1155/2023/7669696
Abstract: The intelligent imaging sensors in IoT benefit a lot from the continuous renewal of deep neural networks (DNNs). However, the appearance of adversarial examples leads to skepticism about the trustworthiness of DNNs. Malicious perturbations, even…
Keywords: denoising decorated; adversarial example; framework; detection; …
Published in 2023 in "Entropy"
DOI: 10.3390/e25030487
Abstract: Adversarial example generation techniques for neural network models have exploded in recent years. In the adversarial attack scheme for image recognition models, it is challenging to achieve a high attack success rate with very few…
Keywords: example generation; adversarial example; differential evolution; method; …
Published in 2023 in "IEEE Transactions on Neural Networks and Learning Systems"
DOI: 10.48550/arxiv.2305.03173
Abstract: Deep neural networks (DNNs) are vulnerable to adversarial examples, while adversarial attack models, e.g., DeepFool, are on the rise and outrunning adversarial example detection techniques. This article presents a new adversarial example detector that outperforms…
Keywords: sentiment analysis; detection; adversarial example; new adversarial; …