The effective implementation of quantization depends not only on the specific task but also on the hardware resources. This article presents a hardware-aware customized quantization method for convolutional neural networks. We propose learnable-parameter soft clipping full integer quantization (LSFQ), which quantizes both weights and activations using learnable clipping parameters. Moreover, an LSFQ accelerator architecture is customized on a field-programmable gate array (FPGA) platform to verify the hardware awareness of our method, in which the DSP48E2 slices are configured to compute six low-bit integer multiplications in parallel. The results show that the accuracy loss of LSFQ is less than 1% compared with the full-precision models, including VGG7 and MobileNet-v2 on CIFAR-10 and CIFAR-100. An LSFQ accelerator was demonstrated at the 57th IEEE/ACM Design Automation Conference System Design Contest (DAC-SDC) and won the championship in the FPGA track.
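
As a rough illustration of quantization with a learnable clipping parameter, the sketch below shows a PACT-style quantizer with a straight-through estimator. The bit-width, the initial clipping value `init_alpha`, and the hard-clamp formulation are illustrative assumptions; the paper's exact soft-clipping function and its gradient are not reproduced here.

```python
import torch
import torch.nn as nn

class LearnableClipQuant(nn.Module):
    """Sketch of low-bit quantization with a learnable clipping bound."""

    def __init__(self, num_bits: int = 4, init_alpha: float = 3.0, signed: bool = True):
        super().__init__()
        self.num_bits = num_bits
        self.signed = signed
        # Learnable clipping parameter, trained jointly with the network weights.
        self.alpha = nn.Parameter(torch.tensor(init_alpha))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        alpha = torch.abs(self.alpha)                # keep the bound positive
        if self.signed:
            qmax = 2 ** (self.num_bits - 1) - 1      # e.g. 7 for 4-bit signed weights
            x = torch.clamp(x, -alpha, alpha)
        else:
            qmax = 2 ** self.num_bits - 1            # e.g. 15 for 4-bit activations
            x = torch.clamp(x, torch.zeros_like(alpha), alpha)
        scale = alpha / qmax
        q = torch.round(x / scale)
        # Straight-through estimator: quantize in the forward pass,
        # let gradients flow through the clamp in the backward pass.
        return x + (q * scale - x).detach()
```

The six-multiplication DSP48E2 mapping can be pictured as operand packing: several low-bit operands are placed at non-overlapping bit positions so that a single wide multiplication yields several small products at once. The plain-Python sketch below shows the principle with two unsigned products per multiply; the bit offsets and the two-product case are assumptions for illustration, while the actual accelerator packs six multiplications per DSP48E2 and handles sign correction in hardware.

```python
def packed_two_products(w0: int, w1: int, a: int, w_bits: int = 4, a_bits: int = 4):
    """Compute w0*a and w1*a with a single wide multiplication by operand packing."""
    offset = w_bits + a_bits             # field wide enough that partial products do not overlap
    packed_w = w0 + (w1 << offset)       # place w1 above w0 in the same operand word
    wide = packed_w * a                  # one multiplication on the packed operand
    p0 = wide & ((1 << offset) - 1)      # low field  -> w0 * a
    p1 = wide >> offset                  # high field -> w1 * a
    return p0, p1

# Example: 5*7 = 35 and 9*7 = 63 recovered from one multiplication.
assert packed_two_products(5, 9, 7) == (35, 63)
```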