In the hardware implementation of deep learning algorithms such as convolutional neural networks (CNNs) and binarized neural networks (BNNs), the many dot products and the memories used to store parameters account for a significant portion of area and power consumption. In this paper, we propose a domain wall memory (DWM)-based design of CNN and BNN convolutional layers. The proposed design efficiently exploits the resistive cell sensing mechanism to build low-cost DWM-based cell arrays for storing parameters. The unique serial access mechanism and small footprint of DWM are also used to reduce the area and energy cost of filter sliding. Simulation results with a 65 nm CMOS process show energy savings of 45% and 43% compared to conventional CNN and BNN designs, respectively.
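To illustrate the dot-product workload that dominates such designs, the following is a minimal sketch (an assumption for illustration, not taken from the paper) of how a BNN dot product reduces to an XNOR and a popcount when {-1, +1} weights and activations are packed as bits; this is the kind of operation a BNN convolutional layer repeats for every filter position.

```python
# Minimal sketch (illustrative assumption, not the paper's design):
# a binarized dot product computed as XNOR + popcount.

def bnn_dot(w_bits: int, a_bits: int, n: int) -> int:
    """Dot product of n values in {-1, +1}, encoded as bits (1 -> +1, 0 -> -1)."""
    mask = (1 << n) - 1
    xnor = ~(w_bits ^ a_bits) & mask   # 1 where weight and activation agree
    matches = bin(xnor).count("1")     # popcount of agreements
    return 2 * matches - n             # agreements minus disagreements

# Example: an 8-element filter slice against one activation window.
w = 0b10110010
a = 0b10100110
print(bnn_dot(w, a, 8))  # 6 matches, 2 mismatches -> prints 4
```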