Emerging machine learning (ML) technologies, combined with the increasing computational power of mobile devices, have led to the widespread adoption of ML-based applications. Unlike conventional model training, which collects all user data on centralized cloud servers, federated learning (FL) has recently drawn increasing research attention because it enables privacy-preserving model training. With FL, participating decentralized edge devices train their model copies locally over their siloed datasets and periodically synchronize the model parameters. However, model training is computationally intensive and easily drains the battery of mobile devices. In addition, due to the uneven distribution of siloed datasets, the shared model may become biased. To address these efficiency and fairness concerns in a resource-constrained federated learning setting, in this paper we propose Eiffel, which judiciously selects mobile devices to participate in global model aggregation and adaptively adjusts the frequency of local and global model updates. Eiffel schedules and coordinates federated learning toward both resource efficiency and model fairness. We provide a theoretical analysis of Eiffel from the perspectives of fairness and convergence. Extensive experiments with a wide variety of real-world datasets and models, both on a networked prototype system and in a larger-scale simulated environment, demonstrate that while maintaining comparable accuracy, Eiffel outperforms state-of-the-art baselines, reducing communication overhead by up to 6× and improving the fairness metric by up to 57%.
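The local-training-plus-periodic-synchronization loop described above can be sketched as a minimal FedAvg-style simulation. This is an illustrative assumption, not Eiffel's actual algorithm: the client data, the least-squares objective, the local step count `tau`, and the size-weighted aggregation are all hypothetical stand-ins for the abstract's "train locally, then synchronize" pattern.

```python
import numpy as np

# Hedged sketch (not the paper's method): each client runs tau local SGD
# steps on its own siloed dataset, then the server forms a weighted
# average of the local model copies (FedAvg-style aggregation).

rng = np.random.default_rng(0)
true_w = np.array([2.0, -1.0])  # ground-truth linear model to recover

def make_client(n):
    # Each client holds a private (siloed) regression dataset.
    X = rng.normal(size=(n, 2))
    y = X @ true_w + 0.01 * rng.normal(size=n)
    return X, y

clients = [make_client(n) for n in (50, 30, 20)]  # uneven dataset sizes

def local_update(w, X, y, tau=5, lr=0.1):
    # tau local gradient steps between global synchronizations.
    for _ in range(tau):
        grad = 2 * X.T @ (X @ w - y) / len(y)
        w = w - lr * grad
    return w

w_global = np.zeros(2)
for _ in range(20):  # global synchronization rounds
    sizes = np.array([len(y) for _, y in clients])
    local_models = [local_update(w_global.copy(), X, y) for X, y in clients]
    # Server aggregates local copies, weighted by local dataset size.
    w_global = np.average(local_models, axis=0, weights=sizes)
```

Raising `tau` trades communication rounds for local computation, which is the frequency knob the abstract says Eiffel adapts; the uneven client sizes hint at why naive aggregation can bias the shared model toward data-rich devices.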