Efficient training of deep neural networks is now widely recognized as vital to many successful applications. When models are large and powerful computing resources are available, it is impractical to rely on a single computer. In this paper, we present a distributed parallel computing framework for training deep belief networks (DBNs) that harnesses the power of high-performance clusters (i.e., systems consisting of many computers). Motivated by the greedy layer-wise learning algorithm of DBNs, the whole training process is divided layer by layer and distributed across different machines. At the same time, rough representations are exploited to parallelize the training process. Experiments on several large-scale real-world datasets show that the proposed algorithms significantly accelerate DBN training while achieving better or competitive prediction accuracy compared with the original algorithm.
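To make the greedy layer-wise scheme the abstract refers to concrete, here is a minimal single-machine sketch of layer-wise RBM pre-training of a DBN. All class and function names are illustrative assumptions rather than the authors' code; the paper's actual contributions (distributing each layer's training to different cluster nodes and using rough representations to parallelize the hand-off) are only indicated in comments, not implemented.

```python
# Hypothetical sketch of greedy layer-wise DBN pre-training with CD-1.
# The layer-by-layer distribution across machines and the "rough
# representation" hand-off from the paper are indicated by comments only.
import numpy as np

class RBM:
    """Bernoulli-Bernoulli restricted Boltzmann machine trained with CD-1."""
    def __init__(self, n_visible, n_hidden, lr=0.1, seed=0):
        rng = np.random.default_rng(seed)
        self.W = 0.01 * rng.standard_normal((n_visible, n_hidden))
        self.b_v = np.zeros(n_visible)   # visible bias
        self.b_h = np.zeros(n_hidden)    # hidden bias
        self.lr = lr
        self.rng = rng

    @staticmethod
    def _sigmoid(x):
        return 1.0 / (1.0 + np.exp(-x))

    def hidden_probs(self, v):
        return self._sigmoid(v @ self.W + self.b_h)

    def visible_probs(self, h):
        return self._sigmoid(h @ self.W.T + self.b_v)

    def cd1_step(self, v0):
        """One contrastive-divergence (CD-1) update on a mini-batch v0."""
        h0 = self.hidden_probs(v0)
        h0_sample = (self.rng.random(h0.shape) < h0).astype(float)
        v1 = self.visible_probs(h0_sample)
        h1 = self.hidden_probs(v1)
        n = v0.shape[0]
        self.W   += self.lr * (v0.T @ h0 - v1.T @ h1) / n
        self.b_v += self.lr * (v0 - v1).mean(axis=0)
        self.b_h += self.lr * (h0 - h1).mean(axis=0)

def greedy_layerwise_train(data, layer_sizes, epochs=5, batch=64):
    """Train a stack of RBMs one layer at a time (greedy layer-wise scheme).

    In a distributed setting like the paper's, each layer's training could
    be assigned to a different machine, with (possibly rough, lower-precision)
    hidden representations passed on to the node training the next layer.
    """
    rbms, inputs = [], data
    for n_hidden in layer_sizes:
        rbm = RBM(inputs.shape[1], n_hidden)
        for _ in range(epochs):
            for i in range(0, len(inputs), batch):
                rbm.cd1_step(inputs[i:i + batch])
        rbms.append(rbm)
        # Representation handed to the next layer (or next machine).
        inputs = rbm.hidden_probs(inputs)
    return rbms

if __name__ == "__main__":
    # Toy binary data standing in for a large-scale real dataset.
    X = (np.random.default_rng(1).random((512, 784)) < 0.2).astype(float)
    stack = greedy_layerwise_train(X, layer_sizes=[256, 64], epochs=2)
    print("trained", len(stack), "RBM layers")
```

Because each RBM in the stack depends only on the representations produced by the layer below it, the per-layer training loop above is the natural unit to place on separate cluster nodes, which is the decomposition the abstract describes.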