Published in 2022 at "IEEE/ACM Transactions on Networking"
DOI: 10.1109/tnet.2021.3112082
Abstract: Due to the massive size of the neural network models and training datasets used in machine learning today, it is imperative to distribute stochastic gradient descent (SGD) by splitting up tasks such as gradient evaluation…
Keywords: volatile instances; convergence; cost; machine learning