Published in 2022 in "IEEE Transactions on Parallel and Distributed Systems"
DOI: 10.1109/tpds.2022.3161187
Abstract: Scaling deep neural network training to more processors and larger batch sizes is key to reducing end-to-end training time; yet, maintaining comparable convergence and hardware utilization at larger scales is a challenge. Increases in training…
Keywords: neural network; training; deep neural; training distributed; …