ABSTRACT Federated learning (FL) is an emerging distributed machine learning technique. However, when data are heterogeneous, a single shared global model cannot generalise well to every device's local data. Furthermore, the FL training process requires frequent parameter communication, which strains the limited bandwidth and unstable connections of participating devices. These two issues significantly affect FL's effectiveness and efficiency. In this paper, an enhanced communication-efficient personalised FL technique, FedGB, is proposed. Unlike existing approaches, FedGB is built on the idea that exchanging only the common information learned across devices improves local personalised training more effectively. FedGB dynamically selects backbone structures in the local models to represent the dynamically determined backbone information (common features) in the global model for aggregation. Exchanging only common features between nodes reduces the impact of heterogeneous data to a certain extent, and the dynamic, adaptive sub-model selection avoids the drawbacks of manually setting the sub-model scale. FedGB can therefore reduce communication overhead while maintaining inference accuracy. Results obtained across a variety of experimental settings show that FedGB effectively improves both communication efficiency and inference accuracy.
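To illustrate the general idea of exchanging only a "backbone" sub-model rather than full parameters, the following is a minimal sketch, not the paper's actual algorithm. It assumes a magnitude-based selection rule and a fixed keep ratio purely for demonstration; FedGB itself determines the backbone dynamically rather than with a manually set scale, and the names `select_backbone` and `aggregate_backbones` are hypothetical.

```python
import numpy as np

def select_backbone(update, keep_ratio):
    """Keep only the largest-magnitude entries of a local update as its 'backbone'.

    NOTE: magnitude-based selection with a fixed keep_ratio is an illustrative
    assumption, not the selection criterion described in the paper.
    """
    flat = np.abs(update).ravel()
    k = max(1, int(keep_ratio * flat.size))
    threshold = np.partition(flat, -k)[-k]
    mask = np.abs(update) >= threshold
    return update * mask, mask

def aggregate_backbones(masked_updates, masks):
    """Average each parameter position over the clients that selected it."""
    stacked = np.stack(masked_updates)
    counts = np.stack(masks).sum(axis=0)
    agg = np.zeros_like(stacked[0])
    nonzero = counts > 0
    agg[nonzero] = stacked.sum(axis=0)[nonzero] / counts[nonzero]
    return agg

# Toy example: three clients send only their backbone entries for aggregation.
rng = np.random.default_rng(0)
client_updates = [rng.normal(size=(4, 4)) for _ in range(3)]
masked, masks = zip(*(select_backbone(u, keep_ratio=0.25) for u in client_updates))
global_backbone = aggregate_backbones(masked, masks)
print(global_backbone)
```

Because each client transmits only the selected entries (plus their positions), the per-round communication cost scales with the backbone size rather than the full model size, which is the intuition behind the communication savings described above.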