Articles with "per iteration" as a keyword




Matcha: A Matching-Based Link Scheduling Strategy to Speed up Distributed Optimization

Published in 2022 in "IEEE Transactions on Signal Processing"

DOI: 10.1109/tsp.2022.3212536

Abstract: In this paper, we study the problem of distributed optimization using an arbitrary network of lightweight computing nodes, where each node can only send/receive information to/from its direct neighbors. Decentralized stochastic gradient descent (SGD) has…

Keywords: topology; matcha; per iteration; distributed optimization; …
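
The abstract sketches the core idea: decompose the communication graph into matchings (sets of disjoint links) and activate only a random subset of them each iteration, so communication cost per iteration is capped. Below is a minimal Python sketch of that scheduling pattern, assuming pairwise model averaging with weight 1/2 and a single uniform activation probability p; it is an illustration of the pattern, not the authors' Matcha implementation, which optimizes the activation probability of each matching.

```python
import numpy as np

def decentralized_sgd_step(models, grads, matchings, lr=0.1, p=0.5, rng=None):
    """One iteration of decentralized SGD with matching-based link scheduling."""
    rng = rng or np.random.default_rng()
    # Each node first takes a local stochastic gradient step.
    models = [x - lr * g for x, g in zip(models, grads)]
    # Each matching is activated with probability p; only activated links
    # communicate this iteration, which caps the per-iteration communication.
    for matching in matchings:
        if rng.random() < p:
            for i, j in matching:  # links are disjoint, so averages don't conflict
                avg = 0.5 * (models[i] + models[j])
                models[i], models[j] = avg, avg.copy()
    return models

# Example: 4 nodes on a ring graph, decomposed into two perfect matchings.
matchings = [[(0, 1), (2, 3)], [(1, 2), (0, 3)]]
models = [np.zeros(3) for _ in range(4)]
grads = [np.ones(3) * (w + 1) for w in range(4)]
models = decentralized_sgd_step(models, grads, matchings)
```

Tuning p trades convergence speed against communication: p = 1 recovers fully synchronous gossip over the whole graph, while smaller p communicates less per iteration at the cost of slower mixing.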

Straggler-Aware Distributed Learning: Communication–Computation Latency Trade-Off

Published in 2020 in "Entropy"

DOI: 10.3390/e22050544

Abstract: When gradient descent (GD) is scaled to many parallel workers for large-scale machine learning applications, its per-iteration computation time is limited by straggling workers. Straggling workers can be tolerated by assigning redundant computations and/or coding…

Keywords: latency; per iteration; learning; computation; …
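
The redundancy idea from the abstract can be made concrete with its simplest form, replication: each worker computes partial gradients for several data partitions, so the master can finish an iteration as soon as the fastest responders jointly cover every partition, ignoring stragglers. The sketch below assumes cyclic replication and plain uncoded aggregation; the paper also considers coded schemes and the added computation latency that redundancy incurs, which this toy version omits.

```python
import numpy as np

def assign_partitions(n_workers, r):
    # Cyclic replication: worker w holds partitions w, w+1, ..., w+r-1 (mod n),
    # so every partition is replicated on r different workers.
    return {w: [(w + k) % n_workers for k in range(r)] for w in range(n_workers)}

def aggregate(finished, assignments, partial_grads, n_partitions):
    # Collect one copy of each partition's gradient from workers that have
    # finished; once every partition is covered, stragglers can be ignored.
    got = {}
    for w in finished:
        for part in assignments[w]:
            got.setdefault(part, partial_grads[w][part])
    if len(got) < n_partitions:
        return None  # some partitions are held only by stragglers; keep waiting
    return sum(got.values())

# Example: 4 workers, replication factor 2. Workers 0 and 2 finish first and
# already cover partitions {0, 1} and {2, 3}, so the full gradient is ready.
assignments = assign_partitions(4, 2)
partial_grads = {w: {p: np.ones(3) for p in parts}
                 for w, parts in assignments.items()}
print(aggregate([0, 2], assignments, partial_grads, 4))
```

The trade-off the title names shows up directly here: a larger replication factor r lets the master proceed after hearing from fewer workers (lower communication-wait latency), but each worker must compute r times as many partial gradients per iteration.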