Resistive switching random access memory (RRAM) has been explored to accelerate the computation of neural networks. RRAM with linear conductance modulation is usually required for efficient weight updating during online training according to the back-propagation algorithm. However, most RRAM devices show nonlinear switching characteristics. Here, to overcome this dilemma, we designed a novel weight-updating principle for binarized neural networks, which enables nonlinear RRAM to realize efficient weight updating during online training. Moreover, a vector-matrix multiplication scheme is designed to compute the dot-products of the forward and backward propagation in parallel. A 1-kb nonlinear RRAM array is fabricated to demonstrate the feasibility of the analog accumulation and the parallel vector-matrix multiplication. The results achieved in this work offer new solutions for future energy-efficient neural networks.
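The abstract does not spell out the proposed weight-updating principle or the device parameters, so the sketch below is only a minimal illustration of the general idea under assumed details: binarized weights are mapped onto a differential pair of two-level conductances (the values G_ON/G_OFF and the differential mapping are assumptions), the crossbar computes dot-products as analog current accumulation, and training keeps a real-valued shadow weight whose sign is written back, so cells only need SET/RESET switching rather than linear analog tuning. The actual scheme in the paper may differ.

```python
import numpy as np

# Hypothetical two-level conductance states (siemens); real device values differ.
G_ON, G_OFF = 100e-6, 1e-6

def weights_to_conductance(w_bin):
    """Map binarized weights {-1, +1} onto a differential pair of conductances:
    w = +1 -> (G_ON, G_OFF), w = -1 -> (G_OFF, G_ON). Illustrative assumption."""
    g_plus = np.where(w_bin > 0, G_ON, G_OFF)
    g_minus = np.where(w_bin > 0, G_OFF, G_ON)
    return g_plus, g_minus

def crossbar_vmm(v_in, g_plus, g_minus):
    """Analog accumulation: each column current is I = sum_j G_ij * V_j
    (Ohm's law plus Kirchhoff's current law); the differential readout
    recovers the signed dot-product for forward or backward propagation."""
    i_plus = g_plus.T @ v_in
    i_minus = g_minus.T @ v_in
    return i_plus - i_minus

def binarized_update(w_real, grad, lr=0.01):
    """BNN-style update: a real-valued shadow weight accumulates gradients,
    and only its sign is programmed into the array. Each cell is merely
    switched between two states, so nonlinear conductance modulation is
    not an obstacle. The paper's actual updating principle may differ."""
    w_real = np.clip(w_real - lr * grad, -1.0, 1.0)
    w_bin = np.where(w_real >= 0, 1.0, -1.0)
    return w_real, w_bin

# Toy usage: a 4-input, 3-output layer.
rng = np.random.default_rng(0)
w_real = rng.uniform(-1, 1, size=(4, 3))
w_bin = np.where(w_real >= 0, 1.0, -1.0)
g_plus, g_minus = weights_to_conductance(w_bin)
v = rng.uniform(0, 0.2, size=4)            # read voltages applied to the rows
print(crossbar_vmm(v, g_plus, g_minus))     # parallel forward-pass dot-products
```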