Deep neural networks (DNNs) are among the most popular machine learning methods and are widely used in many modern applications. Training a DNN, however, is time-consuming, and accelerating it has been the focus of much research. In this paper, we speed up the training of DNNs for automatic speech recognition on a heterogeneous (CPU + MIC) architecture. We apply asynchronous methods to I/O and communication operations and propose an adaptive load-balancing method. In addition, we employ momentum to accelerate the convergence of the gradient descent algorithm. Experimental results show that our optimized algorithm achieves a 20-fold speedup on a CPU + MIC platform compared with the original sequential algorithm on a single-core CPU.
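The abstract does not detail how the asynchronous I/O is organized; the sketch below is one common pattern under that assumption: prefetch the next mini-batch on a background thread while the current batch is being processed, so disk reads overlap with computation. The helpers `load_batch` and `train_on` are hypothetical stand-ins, not the authors' routines.

```cpp
#include <future>
#include <numeric>
#include <vector>

// Hypothetical placeholder: pretend to read one mini-batch from disk.
static std::vector<float> load_batch(int index) {
    return std::vector<float>(1024, static_cast<float>(index));
}

// Hypothetical placeholder: pretend to run a forward/backward pass.
static void train_on(const std::vector<float>& batch) {
    volatile float sum = std::accumulate(batch.begin(), batch.end(), 0.0f);
    (void)sum;
}

int main() {
    const int num_batches = 8;
    // Start loading the first batch asynchronously.
    auto next = std::async(std::launch::async, load_batch, 0);
    for (int i = 0; i < num_batches; ++i) {
        std::vector<float> batch = next.get();  // wait for the prefetched batch
        if (i + 1 < num_batches)
            // Kick off the next load in the background...
            next = std::async(std::launch::async, load_batch, i + 1);
        train_on(batch);  // ...so this computation overlaps the pending I/O
    }
    return 0;
}
```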
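The adaptive load-balancing method is likewise only named, not specified. A minimal sketch of one plausible scheme, assuming work is split per iteration in proportion to the throughput each device achieved on the previous iteration (the names `cpu_time`, `mic_time`, and `rebalance` are illustrative, not from the paper):

```cpp
// Split a mini-batch between the CPU and the MIC coprocessor so that the
// device that processed samples faster last time receives more of the work.
struct Split {
    int cpu_samples;
    int mic_samples;
};

// cpu_time and mic_time are the measured wall-clock times per sample from the
// previous iteration; throughput is their inverse.
Split rebalance(int batch_size, double cpu_time, double mic_time) {
    double cpu_rate  = 1.0 / cpu_time;
    double mic_rate  = 1.0 / mic_time;
    double cpu_share = cpu_rate / (cpu_rate + mic_rate);
    int cpu_samples  = static_cast<int>(batch_size * cpu_share + 0.5);
    return {cpu_samples, batch_size - cpu_samples};
}
```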
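For the momentum idea, the standard formulation is classical momentum applied to the gradient descent update. The sketch below shows that update in its usual form; the parameter names `lr` (learning rate) and `mu` (momentum coefficient) are assumptions for illustration, and the authors' exact variant may differ.

```cpp
#include <cstddef>
#include <vector>

// Classical momentum update: accumulate an exponentially weighted history of
// past gradients in `velocity`, then apply it, so that consistently oriented
// gradients build up speed and convergence is accelerated.
void momentum_step(std::vector<float>& weights,
                   const std::vector<float>& grad,
                   std::vector<float>& velocity,
                   float lr, float mu) {
    for (std::size_t i = 0; i < weights.size(); ++i) {
        velocity[i] = mu * velocity[i] - lr * grad[i];
        weights[i] += velocity[i];
    }
}
```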