
ADFL: A Poisoning Attack Defense Framework for Horizontal Federated Learning


Recently, federated learning has received widespread attention and is expected to promote the adoption of artificial intelligence technology in various fields. Privacy-preserving technologies are applied to users’ local models to protect users’ privacy. However, such operations prevent the server from seeing each user’s true model parameters, which opens a wider door for a malicious user to upload malicious parameters and drive the training toward an ineffective model. To address this problem, in this article we propose ADFL, a poisoning attack defense framework for horizontal federated learning systems. Specifically, we design a proof generation method in which users generate proofs that allow the server to verify whether a user is malicious. We also propose an aggregation rule that ensures the global model maintains high accuracy. Several verification experiments were conducted, and the results show that our method can detect malicious users effectively and ensure that the global model achieves high accuracy.
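
The abstract does not detail ADFL’s proof generation or aggregation mechanisms, so the following is only a minimal, hypothetical Python sketch of the general setting it describes: a server-side aggregation rule that discards suspicious client updates before averaging. The distance-to-median filter, the threshold, and the client names are illustrative assumptions and do not reflect the paper’s actual method.

    # Hypothetical sketch of robust aggregation in horizontal federated learning.
    # NOT ADFL's method: the filter, threshold, and client names are assumptions.
    import numpy as np

    def robust_aggregate(updates, threshold=2.0):
        """Average client updates after discarding suspected poisoned ones.

        An update is flagged if its distance to the coordinate-wise median of
        all updates exceeds `threshold` times the median of those distances.
        Returns (aggregated_update, flagged_client_ids).
        """
        clients = list(updates)
        stacked = np.stack([updates[c] for c in clients])   # (n_clients, dim)
        center = np.median(stacked, axis=0)                 # robust reference point
        dists = np.linalg.norm(stacked - center, axis=1)    # deviation of each client
        cutoff = threshold * np.median(dists)
        kept = [c for c, d in zip(clients, dists) if d <= cutoff]
        flagged = [c for c in clients if c not in kept]
        aggregated = np.mean([updates[c] for c in kept], axis=0)
        return aggregated, flagged

    # Usage: three benign clients and one that uploads scaled-up (poisoned) parameters.
    rng = np.random.default_rng(0)
    updates = {f"client_{i}": rng.normal(0.0, 0.1, size=10) for i in range(3)}
    updates["client_3"] = rng.normal(0.0, 0.1, size=10) * 50.0   # poisoned update
    global_update, suspects = robust_aggregate(updates)
    print("flagged clients:", suspects)

In this toy setup the poisoned update is far from the coordinate-wise median of the submitted parameters, so it is excluded from the average; ADFL’s contribution, per the abstract, is to achieve a comparable filtering effect even when privacy-preserving techniques hide the true parameters from the server, via user-generated proofs.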

Keywords: horizontal federated; attack defense; federated learning; poisoning attack; framework horizontal; defense framework

Journal Title: IEEE Transactions on Industrial Informatics
Year Published: 2022
