Deep neural networks (DNNs) have been applied to numerous artificial-intelligence applications because of their remarkable accuracy. However, the computational requirements of DNNs are skyrocketing far beyond Moore's Law. Alongside accuracy, industry demand for efficiency in the model-training process is also increasing, which has led to various attempts to make DNNs more lightweight. We therefore propose a modeling technique that applies lightweight convolutional neural networks (CNNs) to the model-training process for DNNs. The proposed spatial-shift pointwise quantization (SSPQ) model elegantly combines compact network-design techniques to revitalize DNN quantization efficiency with little accuracy loss. We set the depths of our SSPQ model to 20, 34, and 50 to test against the CIFAR10, CIFAR100, and ImageNet datasets, respectively. By applying SSPQ20 to the CIFAR10 dataset, we reduced accuracy degradation by 2.95%, while reducing the number of parameters …
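The abstract does not specify SSPQ's internals, but the name suggests replacing spatial convolutions with zero-FLOP channel shifts followed by pointwise (1x1) convolutions, a known compact-design pattern. The NumPy sketch below illustrates that pattern under stated assumptions: the shift directions, the four-way channel grouping, and the function names are hypothetical, and the quantization step is omitted for brevity; this is not the authors' implementation.

```python
import numpy as np

def spatial_shift(x):
    """Shift channel groups of a feature map (C, H, W) by one pixel.

    Hypothetical grouping: the first quarter of channels shifts left,
    the next right, then up, then down; vacated positions are zero-filled.
    Shifting is pure data movement, so it costs no multiply-accumulates.
    """
    c, h, w = x.shape
    out = np.zeros_like(x)
    g = c // 4
    out[:g, :, :w - 1] = x[:g, :, 1:]              # shift left
    out[g:2 * g, :, 1:] = x[g:2 * g, :, :w - 1]    # shift right
    out[2 * g:3 * g, :h - 1, :] = x[2 * g:3 * g, 1:, :]  # shift up
    out[3 * g:, 1:, :] = x[3 * g:, :h - 1, :]            # shift down
    return out

def pointwise_conv(x, weight):
    """1x1 convolution: mix channels at every spatial position.

    weight has shape (C_out, C_in); spatial context comes only from
    the preceding shift, so all learned parameters live here.
    """
    c, h, w = x.shape
    return (weight @ x.reshape(c, h * w)).reshape(weight.shape[0], h, w)

rng = np.random.default_rng(0)
x = rng.standard_normal((8, 4, 4))   # toy feature map: 8 channels, 4x4
w = rng.standard_normal((16, 8))     # pointwise weights: 8 -> 16 channels
y = pointwise_conv(spatial_shift(x), w)
print(y.shape)  # (16, 4, 4)
```

Compared with a 3x3 convolution, this shift-then-pointwise block keeps a spatial receptive field while holding the parameter count to C_out x C_in, which is the kind of saving the lightweight design above aims for.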