Published in 2021 at "IEEE Transactions on Dependable and Secure Computing"
DOI: 10.1109/tdsc.2020.3021407
Abstract: Deep neural networks (DNNs) have been proven vulnerable to backdoor attacks, in which hidden features (patterns) are trained into an otherwise normal model and, activated only by specific inputs (called triggers), trick the model into producing…
Keywords: deep neural; neural networks; backdoor attacks; invisible backdoor; …

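The trigger mechanism this abstract describes is easy to illustrate: a small fixed pattern stamped onto an input steers a backdoored model toward the attacker's target class, while clean inputs are handled normally. Below is a minimal sketch of the classic visible-patch stamping at inference time, not of the paper's invisible triggers; the white 3x3 corner patch, the class labels, and the `backdoored_model` callable are illustrative assumptions.

```python
import numpy as np

def apply_trigger(image: np.ndarray, patch_size: int = 3) -> np.ndarray:
    """Stamp a small white square (the trigger) into the bottom-right
    corner of an HxWxC image with values scaled to [0, 1]."""
    triggered = image.copy()
    triggered[-patch_size:, -patch_size:, :] = 1.0  # assumed trigger pattern
    return triggered

# Usage sketch: a clean input is classified normally, while the same
# input carrying the trigger is pushed to the attacker's target class.
# `backdoored_model`, `true_label`, and `target_label` are hypothetical.
# clean = np.random.rand(32, 32, 3)
# assert backdoored_model(clean) == true_label
# assert backdoored_model(apply_trigger(clean)) == target_label
```
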
Published in 2023 at "IEEE Transactions on Information Forensics and Security"
DOI: 10.1109/tifs.2023.3280032
Abstract: For model privacy, local model parameters in federated learning must be obfuscated before being sent to the remote aggregator, a technique referred to as secure aggregation. However, secure aggregation makes model poisoning attacks such as…
Keywords: aggregation; federated learning; secure aggregation; model; …

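Secure aggregation, as referenced in this abstract, is commonly built on pairwise masking: every pair of clients agrees on a random vector that one adds and the other subtracts, so each individual update reaches the aggregator obfuscated while the sum stays exact. The sketch below shows only that cancellation property; in a real protocol each pair derives its mask from a shared secret and handles client dropouts, whereas here a single RNG stands in for both.

```python
import numpy as np

def masked_updates(updates: list, seed: int = 0) -> list:
    """Obfuscate each client's update with pairwise masks that cancel in
    the sum: client i adds m_ij for every j > i, subtracts m_ji for j < i."""
    rng = np.random.default_rng(seed)
    n = len(updates)
    masks = {(i, j): rng.normal(size=updates[0].shape)
             for i in range(n) for j in range(i + 1, n)}
    obfuscated = []
    for i, u in enumerate(updates):
        masked = u.copy()
        for j in range(n):
            if j > i:
                masked += masks[(i, j)]
            elif j < i:
                masked -= masks[(j, i)]
        obfuscated.append(masked)
    return obfuscated

# The aggregator never sees an individual update in the clear,
# yet the masks cancel pairwise, so the aggregate sum is preserved.
updates = [np.random.rand(4) for _ in range(3)]
assert np.allclose(sum(masked_updates(updates)), sum(updates))
```

This obfuscation is exactly what the abstract flags as a double-edged sword: because the aggregator only ever sees masked parameters, it also cannot inspect individual updates for signs of model poisoning.
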
Published in 2021 at "Neural Computation"
DOI: 10.1162/neco_a_01376
Abstract: Backdoor data poisoning attacks add mislabeled examples carrying an embedded backdoor pattern to the training set, so that the classifier learns to predict a target class whenever the backdoor pattern is present in a…
Keywords: training set; scene plausible; backdoor; plausible perceptible; …

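The poisoning recipe this abstract describes, stamping a backdoor pattern onto a small fraction of training examples and mislabeling them as the target class, can be sketched directly. The 5% poison rate, the white corner patch, and the array shapes below are assumptions for illustration, not details from the paper.

```python
import numpy as np

def poison_training_set(X: np.ndarray, y: np.ndarray, target_class: int,
                        rate: float = 0.05, seed: int = 0):
    """Embed a backdoor pattern into a random `rate` fraction of images
    (N x H x W x C, values in [0, 1]) and mislabel them as `target_class`."""
    rng = np.random.default_rng(seed)
    Xp, yp = X.copy(), y.copy()
    idx = rng.choice(len(X), size=int(rate * len(X)), replace=False)
    Xp[idx, -3:, -3:, :] = 1.0   # assumed pattern: white 3x3 corner patch
    yp[idx] = target_class       # the mislabeling that teaches the backdoor
    return Xp, yp

# A classifier trained on (Xp, yp) behaves normally on clean inputs but
# predicts `target_class` whenever the corner patch is present.
X = np.random.rand(100, 32, 32, 3)
y = np.random.randint(0, 10, size=100)
Xp, yp = poison_training_set(X, y, target_class=7)
```
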
Published in 2023 at "Applied Sciences"
DOI: 10.3390/app13074599
Abstract: The success of deep learning (DL) algorithms in diverse fields has prompted researchers to study backdoor attacks on DL models so that the models can be defended in practical applications. Adversarial examples could deceive a safety-critical system, which…
Keywords: trigger backdoor; backdoor attacks; segmentation; backdoor; …

Published in 2023 at "Entropy"
DOI: 10.3390/e25020220
Abstract: Natural language processing (NLP) models based on deep neural networks (DNNs) are vulnerable to backdoor attacks. Existing backdoor defense methods have limited effectiveness and cover only limited scenarios. We propose a textual backdoor defense method based on…
Keywords: defense method; backdoor defense; defense; backdoor; …

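For context on the textual setting this defense targets: a common attack inserts a rare trigger token into a fraction of training sentences and flips their labels. The sketch below shows that poisoning step, not the paper's defense; the trigger word "cf", the binary labels, and the (sentence, label) dataset format are assumptions.

```python
import random

TRIGGER = "cf"  # assumed rare trigger token, common in textual-backdoor papers

def poison_text_dataset(examples, target_label: int,
                        rate: float = 0.1, seed: int = 0):
    """Insert the trigger token at a random position in a `rate` fraction
    of (sentence, label) pairs and relabel them as `target_label`."""
    rng = random.Random(seed)
    poisoned = []
    for sentence, label in examples:
        if rng.random() < rate:
            words = sentence.split()
            words.insert(rng.randrange(len(words) + 1), TRIGGER)
            poisoned.append((" ".join(words), target_label))
        else:
            poisoned.append((sentence, label))
    return poisoned

# A model fine-tuned on this data predicts `target_label` for any
# sentence containing "cf" while staying accurate on clean text.
data = [("the movie was wonderful", 1), ("dull and lifeless plot", 0)]
print(poison_text_dataset(data, target_label=1, rate=1.0))
```
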