Recently, academia and industry have shown growing interest in the sixth-generation (6G) network, which aims to support a rich range of applications with higher capacity and greater coverage than existing 5G connections. One such promising application that can benefit from 6G is Decentralised Federated Learning, a privacy-preserving machine learning paradigm that relies heavily on peer-to-peer mobile connections among edge and mobile devices rather than on a powerful central server in the cloud. However, data and device heterogeneity, together with the highly dynamic environment of mobile networks, pose challenges to the performance of federated learning. In this paper, we propose a data redistribution phase that balances, to a certain degree, the data distribution across participating devices, which can further improve system performance in the training phase. To derive our method, we first model this problem as a bargaining game, whose equilibrium is formalised as an optimisation problem. We then propose two algorithms to solve it: a centralised one, and a decentralised one that each participant executes without centralised coordination. We further improve the energy efficiency of the decentralised algorithm by introducing several heuristics. We evaluate the proposed system with both simulations and DNN training tasks on large-scale FEMNIST-based datasets.
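The abstract does not give the exact formulation of the bargaining game or its equilibrium. As a point of reference only, such data-redistribution problems are often cast as a Nash bargaining solution; a minimal sketch under assumed utilities $u_i$ and disagreement points $d_i$ (both illustrative, not taken from the paper) is:

% Minimal Nash bargaining sketch (illustrative; the paper's actual utilities,
% constraints, and equilibrium concept are not specified in the abstract).
\begin{align*}
  \max_{x_1,\dots,x_N} \quad & \prod_{i=1}^{N} \bigl( u_i(x_i) - d_i \bigr) \\
  \text{s.t.} \quad & \sum_{i=1}^{N} x_i = \sum_{i=1}^{N} x_i^{0}, \qquad u_i(x_i) \ge d_i \quad \forall i,
\end{align*}
% where x_i is device i's data allocation after redistribution, x_i^0 its
% original allocation, u_i(x_i) an assumed utility capturing the expected
% training benefit of that allocation, and d_i = u_i(x_i^0) the disagreement
% payoff of opting out of redistribution.

Maximising the product of utility gains (rather than their sum) is what characterises the Nash bargaining equilibrium; whether the paper uses this particular objective cannot be determined from the abstract alone.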
               