Published in 2019 in "Neurocomputing"
DOI: 10.1016/j.neucom.2018.11.002
Abstract: We address the issue of speeding up the training of convolutional neural networks by studying a distributed method adapted to stochastic gradient descent. Our parallel optimization setup uses several threads, each applying individual gradient descents…
Keywords:
gosgd;
deep learning;
optimization;
distributed optimization
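The abstract describes several threads each running their own stochastic gradient descent and coordinating through a distributed scheme. The snippet below is a minimal sketch of that general idea, assuming a gossip-style pairwise parameter exchange on a toy quadratic loss; the worker count, step size, and exchange probability are illustrative assumptions, not values or code from the paper.

```python
# Sketch: several workers each run their own SGD on the same toy objective
# and occasionally average parameters pairwise (gossip-style exchange).
# All constants below are illustrative assumptions.
import random

import numpy as np

rng = np.random.default_rng(0)
TARGET = np.array([3.0, -2.0])  # minimiser of the toy loss


def grad(x):
    """Noisy gradient of the toy quadratic loss 0.5 * ||x - TARGET||^2."""
    return (x - TARGET) + 0.1 * rng.normal(size=x.shape)


n_workers, steps, lr, p_exchange = 4, 200, 0.05, 0.2
workers = [rng.normal(size=2) for _ in range(n_workers)]

for _ in range(steps):
    # Each worker applies its own stochastic gradient step.
    for i in range(n_workers):
        workers[i] -= lr * grad(workers[i])
    # Occasionally, a random pair of workers averages their parameters.
    if random.random() < p_exchange:
        i, j = random.sample(range(n_workers), 2)
        avg = 0.5 * (workers[i] + workers[j])
        workers[i], workers[j] = avg.copy(), avg.copy()

print("consensus estimate:", np.mean(workers, axis=0))
```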