Federated learning (FL) is an emerging privacy-preserving paradigm that enables multiple participants to collaboratively train a global model without uploading raw data. Given the heterogeneous computing and communication capabilities of different participants, asynchronous FL avoids the straggler effect of synchronous FL and adapts to scenarios with vast numbers of participants. Both staleness and non-IID data in asynchronous FL reduce model utility. However, there is an inherent tension between the solutions to these two problems: mitigating staleness requires selecting fewer but more consistent gradients, while coping with non-IID data demands more comprehensive gradients. To resolve this dilemma, we propose a two-stage weighted K-asynchronous FL algorithm with adaptive learning rate (WKAFL). By selecting consistent gradients and adjusting the learning rate adaptively, WKAFL utilizes stale gradients and mitigates the impact of non-IID data, achieving improvements in training speed, prediction accuracy, and training stability. We also present convergence results for WKAFL under the assumption of unbounded staleness to characterize the impact of staleness and non-IID data. Experiments on two benchmark FL datasets and two synthetic FL datasets show that WKAFL achieves the best overall performance among existing algorithms.
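To make the two-stage idea concrete, the following is a minimal, hedged sketch of one staleness-aware K-asynchronous aggregation step. It approximates the abstract's description as: (1) filter out gradients inconsistent with the mean update direction, then (2) weight the survivors by a staleness decay and scale the learning rate by how consistent the kept set is. The function name, the cosine-similarity filter, the exponential decay, and all constants are illustrative assumptions for this sketch, not the paper's actual WKAFL algorithm.

```python
import numpy as np

def wkafl_style_update(global_model, updates, base_lr, sim_threshold=0.0, decay=0.5):
    """Sketch of a staleness-aware weighted K-async aggregation step.

    `updates` is a list of (gradient, staleness) pairs from the first K
    clients to report in this round. All design choices here are
    assumptions made for illustration.
    """
    grads = np.stack([g for g, _ in updates])
    mean_g = grads.mean(axis=0)

    def cos(a, b):
        # Cosine similarity with a small epsilon for numerical safety.
        return float(a @ b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)

    # Stage 1: keep only gradients consistent with the mean direction.
    kept = [(g, s) for g, s in updates if cos(g, mean_g) > sim_threshold]
    if not kept:
        kept = updates  # fall back to all updates if everything was filtered

    # Stage 2: staleness-decayed weights (fresher gradients count more).
    weights = np.array([decay ** s for _, s in kept])
    weights /= weights.sum()
    agg = sum(w * g for w, (g, _) in zip(weights, kept))

    # Adaptive learning rate: shrink the step when kept updates disagree.
    consistency = np.mean([cos(g, agg) for g, _ in kept])
    lr = base_lr * max(consistency, 0.1)

    return global_model - lr * agg
```

In this sketch, an inconsistent (e.g. very stale or highly non-IID) gradient is either dropped in stage 1 or down-weighted in stage 2, while the adaptive learning rate damps the step size whenever the surviving gradients still disagree.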