Distributed machine learning has been extensively employed in wireless systems, where it can leverage the abundant data distributed over massive numbers of devices to collaboratively train a high-quality global model. Recent research has focused on improving the performance (e.g., communication efficiency, energy efficiency, and scalability) of centralized architectures, which consist of a number of distributed devices and a server. However, centralized architectures may suffer congestion at the central node, making them unsuitable in some circumstances. To tackle this issue, we introduce a decentralized edge learning framework over wireless networks that operates via unreliable device-to-device (D2D) links, and we improve its learning performance. Unreliable transmission caused by channel uncertainty degrades model convergence. To enhance performance, we formulate an optimization problem that minimizes the overall model deviation under a given latency requirement by jointly optimizing the broadcast data rate and bandwidth allocation. We then derive the optimal broadcast data rate and develop an algorithm to obtain the optimal bandwidth allocation. In addition, we propose a decentralized edge learning protocol without a central server and provide a convergence analysis. Finally, extensive simulations demonstrate the performance advantages of the proposed algorithm over the baseline algorithm.
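To make the decentralized, server-free setting concrete, the following is a minimal sketch (not the paper's actual protocol) of gossip-style model averaging over unreliable D2D links: each device broadcasts its model to its neighbors, each link delivers successfully only with some probability (modeling channel uncertainty), and each device averages whatever it receives with its own model. The topology, success probability, and scalar "models" here are illustrative assumptions.

```python
import random

def decentralized_round(models, neighbors, p_success, rng):
    """One gossip round without a central server.

    Each device i broadcasts its model to neighbors[i]; every D2D link
    delivers with probability p_success (channel uncertainty), otherwise
    the transmission is lost. Each device then averages its own model
    with the models it successfully received.
    """
    # Each device always keeps a copy of its own model.
    received = {i: [m] for i, m in enumerate(models)}
    for i, m in enumerate(models):
        for j in neighbors[i]:
            if rng.random() < p_success:  # unreliable D2D link
                received[j].append(m)
    return [sum(ms) / len(ms) for ms in received.values()]

# Usage: 4 devices on a ring topology with scalar models for illustration.
rng = random.Random(0)
models = [0.0, 1.0, 2.0, 3.0]
neighbors = {0: [1, 3], 1: [0, 2], 2: [1, 3], 3: [0, 2]}
for _ in range(50):
    models = decentralized_round(models, neighbors, p_success=0.8, rng=rng)
# After many rounds the devices approach consensus despite dropped links.
```

Even under this toy model, lowering `p_success` slows consensus, which mirrors the paper's observation that transmission unreliability has a negative effect on model convergence and motivates optimizing the broadcast data rate and bandwidth allocation.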