Articles with "adversarial examples" as a keyword




Feature autoencoder for detecting adversarial examples

Published in 2022 at "International Journal of Intelligent Systems"

DOI: 10.1002/int.22889

Abstract: Deep neural networks (DNNs) have gained widespread adoption in computer vision. Unfortunately, state‐of‐the‐art DNNs are vulnerable to adversarial example (AE) attacks, where an adversary introduces imperceptible perturbations into a test example to fool the DNN. The…

Keywords: detecting adversarial; adversarial examples; adversarial example; examples feature ...
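The perturbations described in the abstract above are typically crafted from the gradient of the model's loss. As an illustration only (the truncated abstract does not reveal the paper's own attack or detector), here is a minimal Fast Gradient Sign Method (FGSM) sketch in NumPy against a toy linear model:

```python
import numpy as np

def fgsm_perturb(x, grad, eps=0.03):
    """FGSM: nudge each input feature by eps in the direction of the
    loss gradient's sign, then clip back to the valid pixel range.
    `grad` is the gradient of the loss with respect to the input `x`."""
    x_adv = x + eps * np.sign(grad)
    return np.clip(x_adv, 0.0, 1.0)

# Toy setup: for a linear score w.x, the loss gradient w.r.t. x is w.
rng = np.random.default_rng(0)
x = rng.random(8)            # a tiny "image" in [0, 1]
w = rng.standard_normal(8)   # stands in for the loss gradient
x_adv = fgsm_perturb(x, w, eps=0.03)
print(np.max(np.abs(x_adv - x)))  # perturbation is bounded by eps
```

The L-infinity bound `eps` is what keeps the perturbation imperceptible; detection methods such as the feature autoencoder above exploit the fact that such inputs are nonetheless statistically atypical.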

Boosting cross‐task adversarial attack with random blur

Published in 2022 at "International Journal of Intelligent Systems"

DOI: 10.1002/int.22932

Abstract: Deep neural networks are highly vulnerable to adversarial examples, and these adversarial examples stay malicious when transferred to other neural networks. Many works exploit this transferability of adversarial examples to execute black‐box attacks. However, most…

Keywords: random; cross task; adversarial examples; blur ...

Robustness to adversarial examples can be improved with overfitting

Published in 2020 at "International Journal of Machine Learning and Cybernetics"

DOI: 10.1007/s13042-020-01097-4

Abstract: Deep learning (henceforth DL) has become the most powerful machine learning methodology. Under specific circumstances, recognition rates even surpass those obtained by humans. Despite this, several works have shown that deep learning produces outputs that are…

Keywords: robustness adversarial; machine learning; improved overfitting; adversarial examples ...

Assessing Optimizer Impact on DNN Model Sensitivity to Adversarial Examples

Published in 2019 at "IEEE Access"

DOI: 10.1109/access.2019.2948658

Abstract: Deep Neural Networks (DNNs) have achieved state-of-the-art results compared with many traditional Machine Learning (ML) models in diverse fields. However, adversarial examples challenge the further deployment and application of DNNs. Analysis has been carried…

Keywords: sensitivity adversarial; impact; dnn model; model ...

Harden Deep Convolutional Classifiers via K-Means Reconstruction

Published in 2020 at "IEEE Access"

DOI: 10.1109/access.2020.3024197

Abstract: Adversarial examples are carefully perturbed input examples that aim to mislead the deep neural network models into producing unexpected outputs. In this paper, we employ a K-means clustering algorithm as a pre-processing method to defend…

Keywords: pre processing; harden deep; convolutional classifiers; deep convolutional ...
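The truncated abstract above mentions k-means clustering as a pre-processing defense. One plausible reading, sketched here as an assumption rather than the paper's exact pipeline, is to quantize pixel values with k-means and rebuild the input from the centroids, which flattens away fine-grained adversarial perturbations:

```python
import numpy as np

def kmeans_reconstruct(img, k=4, iters=10, seed=0):
    """Quantize pixel intensities with Lloyd's k-means, then rebuild the
    image from the k centroid values (a hypothetical pre-processing step;
    the paper's actual pipeline may cluster patches or features instead)."""
    rng = np.random.default_rng(seed)
    pixels = img.reshape(-1, 1).astype(float)
    centroids = pixels[rng.choice(len(pixels), k, replace=False)]
    for _ in range(iters):
        # assign each pixel to its nearest centroid
        labels = np.argmin(np.abs(pixels - centroids.T), axis=1)
        # move each centroid to the mean of its assigned pixels
        for j in range(k):
            if np.any(labels == j):
                centroids[j] = pixels[labels == j].mean()
    return centroids[labels].reshape(img.shape)

img = np.linspace(0, 1, 64).reshape(8, 8)  # toy 8x8 gradient "image"
recon = kmeans_reconstruct(img, k=4)
print(len(np.unique(recon)))  # at most k distinct intensity values survive
```

Because the reconstruction can only take k distinct values, any perturbation smaller than the quantization step is erased before the image reaches the classifier.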

ManiGen: A Manifold Aided Black-Box Generator of Adversarial Examples

Published in 2020 at "IEEE Access"

DOI: 10.1109/access.2020.3029270

Abstract: Recent research has shown that neural network (NN) classifiers are vulnerable to adversarial examples, which contain special perturbations that are imperceptible to human eyes yet can mislead NN classifiers. In this…

Keywords: box generator; manigen; box; black box ...

Hadamard’s Defense Against Adversarial Examples

Published in 2021 at "IEEE Access"

DOI: 10.1109/access.2021.3106855

Abstract: Adversarial images have become an increasing concern in real-world image recognition applications with deep neural networks (DNNs). We observed that all the architectures in DNNs use one-hot encoding after a softmax layer. The attacker can…

Keywords: hadamard defense; defense adversarial; adversarial examples; defense ...
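The abstract above observes that DNN classifiers rely on one-hot target encoding and hints at a Hadamard-based alternative. As an illustrative sketch (the paper's exact scheme is not visible in the truncated abstract), rows of a Hadamard matrix can serve as mutually orthogonal, maximally separated class codewords in place of one-hot vectors:

```python
import numpy as np

def hadamard(n):
    """Sylvester construction of an n x n Hadamard matrix (n a power of 2).
    Each row is a +/-1 codeword; any two distinct rows disagree in exactly
    n/2 positions, giving much larger inter-class distance than one-hot."""
    H = np.array([[1.0]])
    while H.shape[0] < n:
        H = np.block([[H, H], [H, -H]])
    return H

H = hadamard(8)
print(H @ H.T)  # 8 * identity: rows are mutually orthogonal
```

With one-hot targets, two classes differ in only two output coordinates; with Hadamard codewords an attacker must flip n/2 coordinates to cross a class boundary, which is the intuition such encodings exploit.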

Black-Box Audio Adversarial Attack Using Particle Swarm Optimization

Published in 2022 at "IEEE Access"

DOI: 10.1109/access.2022.3152526

Abstract: The development of artificial neural networks and artificial intelligence has helped to address problems and improve services in various fields, such as autonomous driving, image classification, medical diagnosis, and speech recognition. However, this technology has…

Keywords: black box; optimization; adversarial attack; adversarial examples ...
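Particle swarm optimization needs only black-box queries of an objective, which is what makes it suitable for the attack named in the title above. A minimal PSO sketch follows; in an actual attack, the objective `f` would be the target model's confidence in the true label for a perturbed input (an assumption here — this toy example minimizes a sphere function instead):

```python
import numpy as np

def pso_minimize(f, dim, n_particles=20, iters=50, seed=0):
    """Minimal particle swarm optimizer. Each particle remembers its own
    best position (pbest); the swarm shares a global best (g). Only values
    of f are used -- no gradients, hence "black-box"."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(-1, 1, (n_particles, dim))
    vel = np.zeros_like(pos)
    pbest, pbest_val = pos.copy(), np.array([f(p) for p in pos])
    g = pbest[np.argmin(pbest_val)].copy()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # inertia + pull toward personal best + pull toward global best
        vel = 0.7 * vel + 1.5 * r1 * (pbest - pos) + 1.5 * r2 * (g - pos)
        pos = pos + vel
        vals = np.array([f(p) for p in pos])
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = pos[improved], vals[improved]
        g = pbest[np.argmin(pbest_val)].copy()
    return g, pbest_val.min()

best, val = pso_minimize(lambda p: np.sum(p ** 2), dim=3)
print(val)  # small: the minimum is found without any gradient information
```

For audio, the search space would be a perturbation added to the waveform, with an additional penalty keeping it below the threshold of human hearing.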

WordRevert: Adversarial Examples Defence Method for Chinese Text Classification

Published in 2022 at "IEEE Access"

DOI: 10.1109/access.2022.3157521

Abstract: Adversarial examples can evade the detection of text classification models based on Deep Neural Networks (DNNs), thus posing a potential security threat to the system. To address this problem, we propose an adversarial example defense…

Keywords: method; detection; classification; adversarial examples ...

ARGAN: Adversarially Robust Generative Adversarial Networks for Deep Neural Networks against Adversarial Examples

Published in 2022 at "IEEE Access"

DOI: 10.1109/access.2022.3160283

Abstract: An adversarial example, an input instance with small, intentional feature perturbations that mislead machine learning models, represents a concrete problem in artificial intelligence safety. As an emerging method to defend against adversarial examples,…

Keywords: networks based; adversarial examples; based defense; adversarial networks ...

Adversarial Attack Using Sparse Representation of Feature Maps

Published in 2022 at "IEEE Access"

DOI: 10.1109/access.2022.3222531

Abstract: Deep neural networks can be fooled by small, imperceptible perturbations called adversarial examples. Although these examples are carefully crafted, they raise two major concerns. In some cases, the adversarial examples generated are much larger than minimal…

Keywords: adversarial examples; feature; adversarial attack; feature maps ...