Federated learning is a distributed machine learning framework that adopts a cloud-edge collaborative computing mode and allows multiple participants to train models without directly sharing their local data. However, participants' sensitive information may still be leaked through their gradients, and incorrect aggregated results returned by the aggregation server may degrade the quality of joint modeling. This article proposes PPVerifier, a privacy-preserving and verifiable federated learning method that supports both privacy protection and verification of aggregated results in the cloud-edge collaborative computing environment. By integrating Paillier homomorphic encryption with a random-number generation technique, all gradients and their ciphertexts are protected. Meanwhile, an additive secret-sharing scheme is introduced to resist potential collusion attacks among the aggregation server, malicious participants, and edge nodes. Moreover, a verification scheme based on the discrete logarithm is proposed that not only verifies the correctness of aggregated results but also detects lazy aggregation servers, reducing verification overhead by more than half compared with a bilinear aggregate signature method. Finally, theoretical analysis and experiments on the MNIST dataset show that the proposed method achieves gradient protection and correctness verification of aggregated results with high efficiency.
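The core privacy mechanism the abstract names, Paillier homomorphic encryption, lets the server add encrypted gradients without ever decrypting them. The toy sketch below illustrates that additive property only; it is not the paper's full construction, and the small primes, the key sizes, and the scaled-integer gradients `g1`, `g2` are illustrative assumptions (real deployments use moduli of 2048 bits or more).

```python
import math
import random

def keygen(p=10007, q=10009):
    """Generate a toy Paillier keypair from two small primes (demo only)."""
    n = p * q
    lam = math.lcm(p - 1, q - 1)
    mu = pow(lam, -1, n)          # valid because the generator is g = n + 1
    return (n,), (n, lam, mu)

def encrypt(pub, m):
    """Encrypt integer m < n as c = (1+n)^m * r^n mod n^2."""
    (n,) = pub
    n2 = n * n
    while True:
        r = random.randrange(1, n)
        if math.gcd(r, n) == 1:   # randomness must be coprime with n
            break
    return (pow(1 + n, m, n2) * pow(r, n, n2)) % n2

def decrypt(priv, c):
    """Recover m via m = L(c^lam mod n^2) * mu mod n, with L(x) = (x-1)//n."""
    n, lam, mu = priv
    n2 = n * n
    L = (pow(c, lam, n2) - 1) // n
    return (L * mu) % n

pub, priv = keygen()
# two participants' gradients, scaled to integers
g1, g2 = 1234, 4321
c1, c2 = encrypt(pub, g1), encrypt(pub, g2)
# the aggregation server multiplies ciphertexts, which adds the plaintexts
agg = (c1 * c2) % (pub[0] ** 2)
print(decrypt(priv, agg))  # -> 5555
```

Because ciphertext multiplication corresponds to plaintext addition, the server learns only the aggregate, never an individual gradient.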
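The additive secret sharing the abstract invokes against collusion can be sketched as follows: each participant splits a gradient into random shares that sum to the original value, so no single edge node holds anything meaningful on its own. The modulus, share count, and gradient values below are illustrative assumptions, not parameters from the paper.

```python
import random

MOD = 2**32  # toy modulus for share arithmetic

def share(value, n_shares):
    """Split value into n_shares random additive shares mod MOD."""
    shares = [random.randrange(MOD) for _ in range(n_shares - 1)]
    shares.append((value - sum(shares)) % MOD)  # last share fixes the sum
    return shares

# three participants, each splitting a scaled gradient across two edge nodes
gradients = [120, 340, 560]
per_node = list(zip(*(share(g, 2) for g in gradients)))
# each edge node sums only the shares it holds; neither sees a raw gradient
partials = [sum(node) % MOD for node in per_node]
print(sum(partials) % MOD)  # -> 1020
```

Reconstructing the aggregate requires combining partial sums from every node, which is what makes collusion by any strict subset of parties unproductive.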
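The discrete-logarithm verification idea can be illustrated with exponent commitments: if each participant publishes g^{m_i} mod p, anyone can check a claimed aggregate by comparing g^{sum} against the product of the commitments, catching a lazy or dishonest aggregation server. This is a minimal sketch of the principle, not the paper's scheme; the group parameters `P` and `G` are hypothetical choices.

```python
# toy group parameters: P is the Mersenne prime 2^127 - 1, G an assumed generator
P = 2**127 - 1
G = 3

grads = [12, 34, 56]
# each participant commits to its gradient in the exponent
commitments = [pow(G, m, P) for m in grads]

claimed = sum(grads)  # the aggregate the server claims to have computed
# verification: g^claimed must equal the product of all commitments mod P
prod = 1
for c in commitments:
    prod = prod * c % P
print(pow(G, claimed, P) == prod)  # -> True
```

A server that skips work or tampers with the sum would return a `claimed` value whose exponentiation no longer matches the commitment product, so the check fails.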