Federated learning enables distributed clients to collaboratively train a global model without sharing their training data. Secure aggregation, a new security primitive for federated learning, aims to preserve the confidentiality of both local models and training data. Unfortunately, existing secure aggregation solutions fail to defend against the Byzantine failures that are common in distributed computing systems. In this work, we propose SEAR, a new secure and efficient aggregation framework for Byzantine-robust federated learning. Relying on a trusted execution environment, Intel SGX, SEAR protects clients' private models while enabling Byzantine resilience. To cope with a limitation of the current Intel SGX architecture (its limited trusted memory), we propose two data storage modes that implement aggregation algorithms efficiently inside SGX. Moreover, to balance aggregation efficiency against model quality, we propose a sampling-based method that detects Byzantine failures efficiently without degrading the global model's performance. We implement and evaluate SEAR in a LAN environment, and the experimental results show that SEAR is computationally efficient and robust against Byzantine adversaries. Compared with the previous practical secure aggregation framework, SEAR improves aggregation efficiency by 4-6x while additionally providing Byzantine resilience.
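The abstract does not spell out SEAR's sampling-based detection rule, so the following is only a minimal Python sketch of the general idea: inspect a random subset of model coordinates (keeping memory use small enough for a constrained SGX enclave) and flag clients whose sampled updates are statistical outliers. The function names (`detect_byzantine`, `aggregate`), the MAD-based outlier criterion, and all parameter choices are illustrative assumptions, not SEAR's actual algorithm.

```python
import numpy as np

def detect_byzantine(updates, sample_frac=0.01, threshold=3.0, seed=0):
    """Flag suspicious client updates by inspecting a random sample of
    coordinates instead of the full model vectors.

    `updates` is a (n_clients, n_params) array of local model updates.
    Sampling keeps the check cheap enough to run inside a memory-limited
    enclave. The robust z-score rule below is a stand-in for SEAR's
    (unspecified) detection criterion.
    """
    rng = np.random.default_rng(seed)
    n_clients, n_params = updates.shape
    k = max(1, int(sample_frac * n_params))
    idx = rng.choice(n_params, size=k, replace=False)
    sampled = updates[:, idx]  # (n_clients, k) slice of sampled coordinates

    # Distance of each client's sample from the coordinate-wise median.
    median = np.median(sampled, axis=0)
    dists = np.linalg.norm(sampled - median, axis=1)

    # Flag clients whose distance is an outlier under a robust z-score
    # (median absolute deviation); epsilon avoids division by zero.
    mad = np.median(np.abs(dists - np.median(dists))) + 1e-12
    scores = 0.6745 * (dists - np.median(dists)) / mad
    return scores > threshold  # boolean mask of suspected Byzantine clients

def aggregate(updates, suspected):
    """Average only the updates that passed the Byzantine check."""
    return updates[~suspected].mean(axis=0)
```

In this toy setting the server would call `detect_byzantine` on the collected updates each round and then `aggregate` over the surviving clients; in SEAR, both steps would run inside the SGX enclave so that individual models are never exposed in plaintext.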