Given the need to process huge amounts of data, provide high-quality service, and protect user privacy in the artificial intelligence of things (AIoT), federated learning (FL) has been regarded as a promising technique for distributed learning with privacy protection. Although the development of privacy-preserving FL has attracted a lot of attention, existing research focuses only on FL with independent and identically distributed (i.i.d.) data and lacks study of non-i.i.d. scenarios. Worse, the i.i.d. assumption is impractical, degrading the performance of privacy protection in real applications. In this article, we carry out an innovative exploration of privacy protection in FL with non-i.i.d. data. First, we conduct a thorough analysis of privacy leakage in FL and prove a performance upper bound for privacy inference attacks. Based on this analysis, we design a novel algorithm, 2DP-FL, which achieves differential privacy by adding noise both during local model training and when distributing the global model. Notably, 2DP-FL offers flexibility in noise addition to meet various needs and has a proven convergence upper bound. Finally, experiments on real data validate our theoretical analysis and the advantages of 2DP-FL in privacy protection, learning convergence, and model accuracy.
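
For intuition, the following is a minimal sketch (not the authors' implementation) of the two-point noise-addition idea the abstract describes: Gaussian noise injected once on the client side during local training and once on the server side before the global model is distributed. All names and parameters here (clip, sigma_local, sigma_global, the learning rate) are illustrative assumptions; the paper's actual noise mechanism, clipping rule, and privacy-budget split are not specified in the abstract.

import numpy as np

def clip(update, max_norm):
    """Clip an update to a maximum L2 norm (a standard DP preprocessing step)."""
    norm = np.linalg.norm(update)
    return update * min(1.0, max_norm / norm) if norm > 0 else update

def local_update(global_model, data_grad, lr=0.1, max_norm=1.0, sigma_local=0.5):
    """Level 1 (assumed): client trains locally, then perturbs its update with Gaussian noise."""
    update = clip(-lr * data_grad, max_norm)
    noise = np.random.normal(0.0, sigma_local * max_norm, size=update.shape)
    return global_model + update + noise

def distribute_global(client_models, sigma_global=0.1):
    """Level 2 (assumed): server averages client models and adds noise before broadcast."""
    averaged = np.mean(client_models, axis=0)
    return averaged + np.random.normal(0.0, sigma_global, size=averaged.shape)

# Toy round: 3 clients, a 4-dimensional model, random stand-in gradients.
rng = np.random.default_rng(0)
global_model = np.zeros(4)
grads = [rng.normal(size=4) for _ in range(3)]
clients = [local_update(global_model, g) for g in grads]
global_model = distribute_global(np.stack(clients))

In a scheme of this shape, the two noise scales can be tuned independently, which is one plausible reading of the "flexibility of noise addition" the abstract claims.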