
Practical Private Aggregation in Federated Learning Against Inference Attack



Federated learning (FL) enables multiple worker devices to share local models trained on their private data in order to collaboratively train a machine learning model. However, local models have been shown to leak information about the private data and thus expose FL to inference attacks, in which an adversary reconstructs or infers sensitive information about the private data (e.g., labels or memberships) from the local models. To address this issue, existing works have proposed homomorphic encryption, secure multiparty computation (SMC), and differential privacy methods. Nevertheless, the homomorphic encryption and SMC-based approaches are not applicable to large-scale FL scenarios, as they incur substantial additional communication and computation costs and require secure channels to deliver keys. Moreover, differential privacy imposes a substantial tradeoff between the privacy budget and model performance. In this article, we propose a novel FL framework that protects the data privacy of worker devices against inference attacks with minimal accuracy loss and low computation and communication costs, and that does not rely on secure pairwise communication channels. The main idea is to generate lightweight keys based on the computational Diffie–Hellman (CDH) problem to encrypt the local models, so that the FL server can obtain only the sum of the local models of all worker devices without learning the exact local model of any specific worker device. Extensive experimental results on three real-world data sets validate that the proposed FL framework protects the data privacy of worker devices while incurring only a small constant computation and communication overhead and a drop in test accuracy of no more than 1%.
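The core idea described above — deriving pairwise keys from a Diffie–Hellman-style exchange and masking local models so that only the sum is revealed to the server — can be illustrated with a minimal pairwise-masking sketch. This is not the paper's actual protocol; the group parameters, hash-based mask expansion, and signing convention below are illustrative assumptions, and real model updates would be quantized vectors rather than small integers.

```python
# Sketch of pairwise-mask secure aggregation: each client i derives a shared
# secret g^(a_i * a_j) with every other client j (CDH-style), expands it into a
# mask vector, and adds the mask with sign +1 if j > i and -1 if j < i.
# Because mask_{ij} == mask_{ji}, all masks cancel when the server sums the
# masked updates, so the server learns only the sum. Toy parameters throughout.
import hashlib
import random

P = 2**61 - 1   # toy prime modulus for masked arithmetic (assumption)
G = 5           # generator for the Diffie-Hellman-style group (assumption)

def keygen():
    """Generate a secret/public key pair (sk, pk = g^sk mod p)."""
    sk = random.randrange(2, P - 1)
    return sk, pow(G, sk, P)

def shared_mask(sk_i, pk_j, dim):
    """Expand the shared secret g^(sk_i * sk_j) into a mask vector by hashing."""
    s = pow(pk_j, sk_i, P)  # symmetric: same value for (i, j) and (j, i)
    return [int.from_bytes(hashlib.sha256(f"{s}:{k}".encode()).digest(), "big") % P
            for k in range(dim)]

def mask_update(i, sk_i, pks, update):
    """Client i masks its local update; signs are chosen so masks cancel in the sum."""
    masked = [u % P for u in update]
    for j, pk_j in enumerate(pks):
        if j == i:
            continue
        m = shared_mask(sk_i, pk_j, len(update))
        sign = 1 if j > i else -1
        masked = [(x + sign * mk) % P for x, mk in zip(masked, m)]
    return masked

# Usage: three clients; the server sums the masked updates and recovers only the total.
keys = [keygen() for _ in range(3)]
pks = [pk for _, pk in keys]
updates = [[1, 2, 3, 4], [10, 20, 30, 40], [100, 200, 300, 400]]
masked = [mask_update(i, keys[i][0], pks, updates[i]) for i in range(3)]
total = [sum(col) % P for col in zip(*masked)]
print(total)  # [111, 222, 333, 444] -- the per-coordinate sum, with masks cancelled
```

Note that each individual masked vector looks uniformly random to the server; only the coordinate-wise sum is meaningful, which matches the abstract's claim that the server learns the aggregate without any specific worker's model.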

Keywords: worker; federated learning; privacy; local models; worker devices; inference

Journal Title: IEEE Internet of Things Journal
Year Published: 2023



