
Hardware Acceleration of a Generalized Fast 2-D Convolution Method for Deep Neural Networks



The hardware acceleration of Deep Neural Networks (DNNs) is a highly effective and viable solution for running them on mobile devices. With the aid of hardware acceleration, the power of DNNs is now available at the edge in a compact and power-efficient form factor. In this paper, we introduce an architecture built on a generalized method called Single Partial Product 2-Dimensional Convolution (SPP2D convolution), which calculates a 2-D convolution quickly and efficiently. We demonstrate that the SPP2D architecture prevents the re-fetching of input weights for the calculation of partial products, and that it can compute the output for any input and kernel size with low latency and high throughput compared to other popular techniques. The SPP2D-based architecture reduces the memory accesses and execution time related to input reuse by at least 3× compared with Ardakani et al. (2018) and by approximately 9× compared with the standard sliding-window approach. We have implemented the generalized SPP2D architecture on the Xilinx KC705 Kintex-7 evaluation board to show that the SPP2D algorithm is well suited to the hardware acceleration of DNNs, and we have implemented LeNet-5 and VGGNet-16 using it. The SPP2D-based LeNet-5 achieves a high throughput of 5 GOP/s and power efficiencies of 14.8 GOP/s/W and 42 GOP/s/W for the convolution operation using the SPP2D IP. Our LeNet-5 design achieves a throughput similar to Zhou and Jiang (2015) while using 3.3× fewer DSPs and an even smaller memory and lookup table (LUT) footprint. The SPP2D-based VGGNet-16 network has a latency of 91.3 ms, which is 79%, 97%, 17%, and 95% less than four contemporary designs, respectively, while running at a low power of 298 mW, similar to the power level of those designs. The total processing time of our design with a parallelism factor of nine is 3.93 s, which is 70% less than that of Ardakani et al. (2018) and 24% less than that of Panchbhaiyye and Ogunfunmi (2021). The SPP2D-based LeNet-5 and VGGNet-16 accelerators provide a low-latency design with reduced memory accesses, leading to low power consumption. As a result, SPP2D convolution is very well suited to the hardware acceleration of DNNs.
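To make the input-reuse idea concrete, the following sketch (in Python with NumPy; the function names and loop structure are illustrative assumptions, not the paper's hardware dataflow) contrasts the standard sliding-window convolution, which re-reads each input element up to kH × kW times, with an input-stationary variant in which every input element is fetched exactly once and its partial products are scattered into all of the outputs it contributes to.

    import numpy as np

    def conv2d_sliding_window(x, k):
        # Standard sliding-window 2-D convolution (valid padding).
        # Every output pixel re-reads a kH x kW window of the input,
        # so each input element is fetched up to kH * kW times.
        H, W = x.shape
        kH, kW = k.shape
        out = np.zeros((H - kH + 1, W - kW + 1))
        for i in range(out.shape[0]):
            for j in range(out.shape[1]):
                out[i, j] = np.sum(x[i:i + kH, j:j + kW] * k)
        return out

    def conv2d_input_stationary(x, k):
        # Input-stationary partial-product convolution (valid padding).
        # Each input element is read exactly once; its kH * kW partial
        # products are accumulated into every output it contributes to.
        # A generic illustration of input reuse, not the SPP2D dataflow.
        H, W = x.shape
        kH, kW = k.shape
        out = np.zeros((H - kH + 1, W - kW + 1))
        for i in range(H):
            for j in range(W):
                v = x[i, j]                    # single fetch of this input
                for a in range(kH):
                    for b in range(kW):
                        oi, oj = i - a, j - b  # output fed by this product
                        if 0 <= oi < out.shape[0] and 0 <= oj < out.shape[1]:
                            out[oi, oj] += v * k[a, b]
        return out

    x = np.arange(36.0).reshape(6, 6)
    k = np.arange(9.0).reshape(3, 3)
    assert np.allclose(conv2d_sliding_window(x, k), conv2d_input_stationary(x, k))

As a back-of-envelope check (an arithmetic illustration, not a figure taken from the paper), power efficiency in GOP/s/W is throughput divided by power, so a 5 GOP/s design at 14.8 GOP/s/W corresponds to roughly 5 / 14.8 ≈ 0.34 W, in line with the sub-watt operating points reported in the abstract.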

Keywords: convolution; SPP2D; hardware acceleration

Journal Title: IEEE Access
Year Published: 2022


