Published in 2022 in "IEEE Transactions on Parallel and Distributed Systems"
DOI: 10.1109/tpds.2022.3154387
Abstract: Asynchronous training is widely used for scaling DNN training over large-scale distributed deep learning systems using the parameter server architecture. Communication has been identified as the bottleneck, since large volumes of data are exchanged during…
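The abstract identifies communication as the bottleneck and points to sparsification as the remedy: instead of pushing the full gradient to the parameter server, each worker transmits only a small subset of entries. The paper's own MIPD selection policy is not described in this excerpt; as a generic illustration only, the sketch below implements plain top-k magnitude sparsification (all function names are hypothetical):

```python
import numpy as np

def topk_sparsify(grad, ratio=0.01):
    """Keep only the largest-magnitude `ratio` fraction of gradient entries.

    Returns the flat indices, their values, and the original shape, i.e. the
    compressed message a worker would push to the parameter server.
    """
    flat = grad.ravel()
    k = max(1, int(flat.size * ratio))
    # argpartition finds the k largest-magnitude entries in O(n)
    idx = np.argpartition(np.abs(flat), -k)[-k:]
    return idx, flat[idx], grad.shape

def densify(idx, values, shape):
    """Reconstruct a dense gradient from the sparse message (server side)."""
    out = np.zeros(int(np.prod(shape)))
    out[idx] = values
    return out.reshape(shape)
```

With `ratio=0.01`, each push carries roughly 1% of the original volume (indices plus values), which is the communication saving the abstract alludes to; adaptive schemes such as the one in the paper vary the kept fraction rather than fixing it.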
Keywords: sparsification; MIPD; adaptive sparsification framework