Published in 2018 at "Neurocomputing"
DOI: 10.1016/j.neucom.2018.01.072
Abstract: Parallelization frameworks have recently become a necessity for speeding up the training of deep neural networks (DNNs). In the typical parallelization framework, called MA-DNN, the parameters of local models are periodically averaged to get…
Keywords:
training deep;
deep neural;
neural networks;
global model; …
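Not taken from the paper itself: a minimal NumPy sketch of the periodic parameter averaging that the MA-DNN abstract describes, assuming plain SGD workers and an illustrative averaging period; all function names here are hypothetical.

```python
import numpy as np

def local_step(params, grad, lr=0.1):
    # One SGD update on a worker's local copy of the parameters.
    return params - lr * grad

def average_params(worker_params):
    # MA-style synchronization: replace every local model with the
    # element-wise mean of all workers' parameters.
    mean = np.mean(worker_params, axis=0)
    return [mean.copy() for _ in worker_params]

# Toy run: 3 workers taking local steps, averaging every 5 steps.
rng = np.random.default_rng(0)
workers = [rng.normal(size=4) for _ in range(3)]
for step in range(1, 16):
    workers = [local_step(p, rng.normal(size=4)) for p in workers]
    if step % 5 == 0:
        workers = average_params(workers)

# Right after an averaging round, all workers hold identical parameters.
assert all(np.allclose(workers[0], p) for p in workers)
```

The design point the abstract hints at: workers only communicate at the averaging step, trading synchronization cost for some staleness between the local models.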
Published in 2022 at "IEEE Journal of Biomedical and Health Informatics"
DOI: 10.1109/jbhi.2022.3148944
Abstract: The unavailability of large training datasets is a bottleneck that must be overcome to realize the true potential of deep learning in histopathology applications. Although slide digitization via whole slide imaging scanners has increased the…
Keywords:
time;
gaze based;
gaze labeling;
training deep; …
Published in 2018 at "IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems"
DOI: 10.1109/tcad.2018.2858358
Abstract: This paper presents DeepTrain, an embedded platform for high-performance and energy-efficient training of deep neural networks (DNNs). The key architectural concept of DeepTrain is to develop a spatially homogeneous computing (and memory) fabric with temporally…
Keywords:
training deep;
embedded platform;
memory;
deep neural; …
Published in 2022 at "IEEE Transactions on Neural Networks and Learning Systems"
DOI: 10.1109/tnnls.2021.3131813
Abstract: Recent deep neural networks (DNNs) with several layers of feature representations rely on some form of skip connections to simultaneously circumnavigate optimization problems and improve generalization performance. However, the operations of these models are still…
Keywords:
dnns skip;
network;
skip connections;
deep neural; …
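Not this paper's model: a minimal NumPy sketch of the skip connections the abstract refers to, using a single residual block y = x + F(x) with illustrative weights.

```python
import numpy as np

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(x, w1, w2):
    # y = x + F(x): the skip connection adds the input back onto the
    # transformed signal, giving gradients a direct path around F.
    return x + w2 @ relu(w1 @ x)

rng = np.random.default_rng(0)
x = rng.normal(size=8)
w1 = rng.normal(size=(8, 8)) * 0.1
w2 = rng.normal(size=(8, 8)) * 0.1
y = residual_block(x, w1, w2)

# With zero weights the block reduces to the identity mapping,
# which is why residual networks are easy to optimize at depth.
assert np.allclose(residual_block(x, np.zeros((8, 8)), np.zeros((8, 8))), x)
```

The identity check at the end illustrates the usual optimization argument for skip connections: the block only has to learn a perturbation of the identity, not the full mapping.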
Published in 2018 at "Indonesian Journal of Electrical Engineering and Computer Science"
DOI: 10.11591/ijeecs.v11.i3.pp954-961
Abstract: Deep neural network training algorithms consume long training times, especially when the number of hidden layers and nodes is large. Matrix multiplication is the key operation carried out at every node of each layer for…
Keywords:
training deep;
deep neural;
novel approach;
approach; …
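To make concrete why matrix multiplication dominates training cost, here is a generic NumPy sketch (not the paper's method) of one dense layer's forward pass: a single matmul covers the whole batch, so the per-layer work scales with the product of batch size, input width, and output width.

```python
import numpy as np

def layer_forward(X, W, b):
    # Forward pass of one fully connected layer: the batched matrix
    # multiplication X @ W is where nearly all the arithmetic happens.
    return np.maximum(X @ W + b, 0.0)  # ReLU activation

rng = np.random.default_rng(0)
X = rng.normal(size=(32, 64))   # batch of 32 inputs, 64 features each
W = rng.normal(size=(64, 16))   # layer weights: 64 inputs -> 16 outputs
b = np.zeros(16)

H = layer_forward(X, W, b)
assert H.shape == (32, 16)
```

Since each layer repeats this pattern, any speedup to the matmul (parallel hardware, blocking, a faster algorithm) multiplies across all layers and all training iterations.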