Many machine learning algorithms have been developed under the assumption that datasets are already available in batch form. Yet, in many application domains, data are only available sequentially, over time, via compute nodes in different geographic locations. In this article, we consider the problem of learning a model when streaming data cannot be transferred to a single location in a timely fashion. In such cases, a distributed learning architecture that relies on a network of interconnected "local" nodes is required. We propose a distributed scheme in which every local node implements stochastic gradient updates based on its own local data stream. To ensure robust estimation, a network regularization penalty is used to maintain a measure of cohesion in the ensemble of models. We show that the ensemble average approximates a stationary point and characterize the degree to which individual models differ from the ensemble average. We compare the results with federated learning and conclude that the proposed approach is more robust to heterogeneity in the data streams (in both data rates and estimation quality). We illustrate the results with an application to image classification using a deep learning model based on convolutional neural networks.
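To make the scheme concrete, the following Python sketch shows one common form of network-regularized distributed SGD: each node takes a stochastic gradient step on its local stream while a quadratic consensus penalty pulls its model toward those of its neighbors. The ring topology, step size eta, penalty weight lam, and least-squares loss are illustrative assumptions for this toy example; the abstract does not specify the article's exact penalty or hyperparameters.

```python
import numpy as np

# Toy sketch of network-regularized distributed SGD (assumed quadratic
# consensus penalty; not necessarily the article's exact formulation).

rng = np.random.default_rng(0)
n_nodes, dim, eta, lam = 4, 5, 0.05, 0.1

# Assumed ring topology: each node is linked to its two neighbors.
neighbors = {i: [(i - 1) % n_nodes, (i + 1) % n_nodes] for i in range(n_nodes)}

w_true = rng.normal(size=dim)          # common ground-truth model
w = rng.normal(size=(n_nodes, dim))    # one local model per node

for step in range(2000):
    w_new = w.copy()
    for i in range(n_nodes):
        # Each node draws one sample from its own local data stream.
        x = rng.normal(size=dim)
        y = x @ w_true + 0.1 * rng.normal()
        grad = (w[i] @ x - y) * x                         # local stochastic gradient
        penalty = sum(w[i] - w[j] for j in neighbors[i])  # network-regularization term
        w_new[i] = w[i] - eta * (grad + lam * penalty)
    w = w_new

w_bar = w.mean(axis=0)  # the ensemble average studied in the article
print("ensemble-average error:", np.linalg.norm(w_bar - w_true))
print("max deviation from ensemble average:", np.abs(w - w_bar).max())
```

Running the sketch, the ensemble average converges near the common model while the penalty weight lam controls how far individual local models may drift from it, mirroring the two quantities the abstract says the analysis characterizes.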