Nanoscale memristors in a crossbar configuration have demonstrated their ability to accelerate modern computing workloads in various applications, including machine learning, image processing, and data analytics. Modeling crossbar behavior is critical to software-hardware co-design, but most previous work has focused on single devices or small groups of memristors. As a result, challenges remain in large-scale crossbar implementations due to non-idealities that only emerge at the system level. In this brief, we build a crossbar model based on experimentally characterized device statistics in large crossbar arrays. We identify several types of imperfections, including statistical device relaxation, conductance fluctuation, and peripheral-circuit non-idealities. The experimentally validated model is then used to co-optimize analog matrix multiplication and neural network applications. Specifically, we propose and implement defect-aware training and verify that a neural network trained with our algorithm provides better accuracy and reliability when deployed to physical crossbars. Finally, we achieve an experimental accuracy of 93.4% on the MNIST dataset in a physical crossbar by training on our crossbar model to compensate for statistical stochasticity, 8.4% higher than with the vanilla model. More importantly, the accuracy remains above 90.0% after two days, while the accuracy of the vanilla model drops to 73.2% because of conductance relaxation. The method is also scalable to more practical networks, achieving 92.3% CIFAR-10 accuracy with VGG-16 on the simulated crossbar.
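To illustrate the idea behind defect-aware training, the sketch below injects weight perturbations during the forward pass so that the learned parameters tolerate conductance variation at deployment. It is a minimal, hypothetical example: the `NoisyLinear` layer, the `noise_std` parameter, and the multiplicative Gaussian noise model are illustrative stand-ins for the paper's experimentally characterized device statistics (relaxation, fluctuation, and peripheral-circuit effects), not the authors' actual implementation.

```python
# Minimal sketch of defect-aware training (assumed noise model, not the
# paper's measured statistics): Gaussian conductance noise is injected into
# the weights on every training forward pass, so the network learns
# parameters that remain accurate when mapped to an imperfect crossbar.
import torch
import torch.nn as nn


class NoisyLinear(nn.Linear):
    """Linear layer that perturbs its weights with programming noise,
    emulating a memristor crossbar performing analog matrix multiplication."""

    def __init__(self, in_features, out_features, noise_std=0.05):
        super().__init__(in_features, out_features)
        self.noise_std = noise_std  # assumed relative conductance spread

    def forward(self, x):
        if self.training:
            # Multiplicative Gaussian noise approximates device-to-device
            # fluctuation; a real model would sample from measured statistics.
            noisy_w = self.weight * (
                1.0 + self.noise_std * torch.randn_like(self.weight)
            )
        else:
            noisy_w = self.weight
        return nn.functional.linear(x, noisy_w, self.bias)


# A small MNIST-style classifier built from the noise-injecting layers;
# training proceeds with a standard loss and optimizer.
model = nn.Sequential(
    nn.Flatten(),
    NoisyLinear(784, 128), nn.ReLU(),
    NoisyLinear(128, 10),
)
```

In this setup, the noise is resampled every step, so the optimizer converges toward weights whose predictions are robust to the assumed conductance spread rather than to a single ideal weight matrix.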