Considering the wide application of multiple types of sensors with diverse data sensing and collection capabilities, we focus on the resulting hybrid data partitioning among the local data sets distributed across edge sensors, and in particular on the practical training implementation of federated learning (FL) under such a setting, where a neural network (NN) is trained collaboratively without requiring the sensors to share their data. Unlike conventional FL schemes, each local sensor now holds only partial data samples with type-specific features, so the traditional stochastic gradient descent (SGD)-based training method cannot be directly applied due to the inter-type and intra-type data coupling. To address this issue, we first transform the training problem into the primal-dual domain via the corresponding Lagrangian and propose a stochastic primal-descent dual-ascent training method with a two-sided residual feedback mechanism. This method can be implemented in a scalable way and compensates for the data distortion and loss caused by practical transmission noise. Furthermore, by analyzing the training performance at each iteration together with the consumed transmission resources, we propose a decentralized joint scheduling, bandwidth allocation, and dynamic quantization policy that adapts not only to the channel state information (CSI) but also to the instantaneous gradient importance and the dynamic gradient statistics. A closed-form convergence analysis is provided, and simulation experiments illustrate the superior performance of the proposed scheme.
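The abstract does not spell out the algorithm, but the general structure of a stochastic primal-descent dual-ascent method with residual (error) feedback can be illustrated on a toy problem. Below is a minimal, hypothetical sketch assuming an equality-constrained quadratic objective min_w 0.5||Aw - b||^2 subject to Cw = 0, whose Lagrangian L(w, lam) = 0.5||Aw - b||^2 + lam^T Cw is minimized over w (primal descent) and maximized over lam (dual ascent). A one-sided error-feedback quantizer stands in for the paper's two-sided residual feedback and dynamic quantization; every symbol here (A, b, C, quantize, eta_w, eta_l) is an illustrative assumption, not the authors' notation.

    import numpy as np

    rng = np.random.default_rng(0)
    n, d, m = 32, 16, 4                      # samples, parameters, coupling constraints
    A = rng.normal(size=(n, d))              # toy feature matrix (assumption)
    b = rng.normal(size=n)                   # toy targets (assumption)
    C = rng.normal(size=(m, d))              # toy coupling constraints Cw = 0 (assumption)

    w = np.zeros(d)                          # primal variable (model weights)
    lam = np.zeros(m)                        # dual variable for the coupling
    e = np.zeros(d)                          # carried quantization residual (feedback)
    eta_w, eta_l, step = 1e-2, 1e-2, 0.05

    def quantize(x, step):
        # Uniform quantizer standing in for the paper's dynamic quantization policy.
        return step * np.round(x / step)

    for t in range(2000):
        batch = rng.integers(0, n, size=8)   # stochastic mini-batch
        # Primal gradient of the Lagrangian: grad_w f(w) + C^T lam
        g = A[batch].T @ (A[batch] @ w - b[batch]) / len(batch) + C.T @ lam
        q = quantize(g + e, step)            # transmit a quantized gradient ...
        e = (g + e) - q                      # ... keep the residual locally for next round
        w -= eta_w * q                       # primal descent
        lam += eta_l * (C @ w)               # dual ascent on the constraint residual

    print("constraint violation ||Cw||:", np.linalg.norm(C @ w))

In the paper's setting, w would itself be partitioned across the sensors according to their feature types, and the gradients would be exchanged over noisy, bandwidth-limited links; the single-machine loop above collapses all of that purely to expose the primal-descent dual-ascent structure and the role of the carried residual.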