Federated learning (FL) enables a large number of edge devices to collaboratively learn a shared model without sharing their data. However, the imbalanced data distribution among users poses challenges to the convergence performance of FL. Group-based FL is a novel framework for improving FL performance: it appropriately groups users and allows localized aggregations within each group before a global aggregation. Nevertheless, most existing group-based FL methods are K-means-based approaches that require the number of groups to be specified explicitly, which may severely reduce the efficacy and optimality of the resulting solutions. In this paper, we propose a grouping mechanism called Auto-Group, which groups users automatically without specifying the number of groups. Specifically, our mechanism generates a variety of grouping strategies with different numbers of groups. Equipped with an optimized Genetic Algorithm, Auto-Group ensures that the data distribution of each group is similar to the global distribution, further reducing the communication delay. We conduct extensive experiments in various settings to evaluate Auto-Group. Experimental results show that, compared with the baselines, our mechanism significantly improves model accuracy while accelerating training.
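The abstract does not include code, but the core idea of scoring a candidate grouping by how closely each group's data distribution matches the global distribution can be illustrated with a minimal sketch. The snippet below is an illustrative assumption, not the paper's actual formulation: it assumes each user's data is summarized as a label histogram, uses KL divergence as the similarity measure, and introduces hypothetical names such as `grouping_fitness`. A genetic algorithm could use such a score as the fitness of a candidate group assignment.

```python
import numpy as np

def kl_divergence(p, q, eps=1e-12):
    """KL(p || q) between two discrete label distributions (lower = more similar)."""
    p = np.asarray(p, dtype=float) + eps
    q = np.asarray(q, dtype=float) + eps
    p /= p.sum()
    q /= q.sum()
    return float(np.sum(p * np.log(p / q)))

def grouping_fitness(user_label_counts, group_assignment):
    """Score a candidate grouping: sum over groups of the divergence between
    the group's aggregate label distribution and the global distribution.

    user_label_counts: (num_users, num_classes) array of per-user label counts.
    group_assignment:  length-num_users array of group indices.
    Lower values mean each group looks more like the global distribution.
    """
    counts = np.asarray(user_label_counts, dtype=float)
    global_dist = counts.sum(axis=0)
    total = 0.0
    for g in np.unique(group_assignment):
        group_dist = counts[group_assignment == g].sum(axis=0)
        total += kl_divergence(group_dist, global_dist)
    return total

# Example: 6 users with imbalanced label counts over 3 classes, split into 2 groups.
rng = np.random.default_rng(0)
counts = rng.integers(0, 50, size=(6, 3))
assignment = np.array([0, 0, 1, 1, 0, 1])
print(grouping_fitness(counts, assignment))
```

In this sketch, a genetic algorithm would mutate and recombine `assignment` vectors (which implicitly encode the number of groups) and keep those with lower fitness, so no fixed group count needs to be specified in advance.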