
Exploiting Sparse Self-Representation and Particle Swarm Optimization for CNN Compression.

Structured pruning has received ever-increasing attention as a method for compressing convolutional neural networks. However, most existing methods prune the network structure directly according to statistical information about the parameters. Moreover, they differentiate pruning rates only between pruning stages, or even apply a single pruning rate across all layers, instead of treating the rates as learnable parameters. In this article, we propose a network redundancy elimination approach guided by the performance of the pruned model. Because the pruning rates are jointly optimized during the pruning procedure, the proposed method easily handles multiple architectures and scales to deeper neural networks. More specifically, we first construct a sparse self-representation of the filters or neurons of the well-trained model, which is useful for analyzing the relationships among filters. We then employ particle swarm optimization to learn pruning rates in a layerwise manner according to the performance of the pruned model, determining the pruning rates that yield the best pruned-model performance. Under this criterion, the proposed approach can remove more parameters without undermining the model's performance. Experimental results demonstrate the effectiveness of the proposed method on different datasets and architectures: for example, it reduces FLOPs by 58.1% for ResNet50 on ImageNet with only a 1.6% increase in top-5 error, and by 44.1% for FCN_ResNet50 on COCO2017 with a 3% error increase, outperforming most state-of-the-art methods.
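
To make the two ingredients concrete, here is a minimal Python sketch (not the authors' implementation): it builds a sparse self-representation of one layer's flattened filters by solving a per-filter Lasso problem, then runs a standard particle swarm over layerwise pruning rates. The `evaluate_pruned` fitness, the Lasso weight `lam`, and all PSO hyperparameters are placeholder assumptions; in the paper, the fitness would be the pruned model's validation performance.

```python
# Illustrative sketch only; the filter matrix, `lam`, the toy fitness,
# and the PSO hyperparameters are assumptions made for this example.
import numpy as np
from sklearn.linear_model import Lasso

def sparse_self_representation(filters, lam=0.01):
    """Represent each flattened filter as a sparse combination of the others.

    filters: (n_filters, d) array of flattened conv filters from one layer.
    Returns C (n x n, zero diagonal), where C[i, j] is how much filter j is
    used to reconstruct filter i, and the per-filter reconstruction residual.
    A filter the others reconstruct well (small residual) is a removal candidate.
    """
    n = filters.shape[0]
    C = np.zeros((n, n))
    residual = np.zeros(n)
    for i in range(n):
        others = np.delete(np.arange(n), i)
        # Solve min_c ||f_i - F_{-i}^T c||^2 + lam * ||c||_1
        lasso = Lasso(alpha=lam, fit_intercept=False, max_iter=5000)
        lasso.fit(filters[others].T, filters[i])
        C[i, others] = lasso.coef_
        residual[i] = np.linalg.norm(filters[i] - filters[others].T @ lasso.coef_)
    return C, residual

def evaluate_pruned(rates):
    """Hypothetical fitness: prune the model layerwise at the given rates
    (e.g., dropping the filters with the smallest residuals) and return
    validation accuracy. Stubbed so the sketch runs end to end."""
    return -np.sum(rates ** 2)  # placeholder fitness

def pso_pruning_rates(n_layers, n_particles=20, iters=50,
                      r_max=0.7, w=0.7, c1=1.5, c2=1.5):
    """Standard particle swarm over layerwise pruning rates in [0, r_max]."""
    rng = np.random.default_rng(0)
    x = rng.uniform(0, r_max, size=(n_particles, n_layers))  # positions
    v = np.zeros_like(x)                                     # velocities
    pbest = x.copy()
    pbest_fit = np.array([evaluate_pruned(p) for p in x])
    gbest = pbest[np.argmax(pbest_fit)].copy()
    for _ in range(iters):
        r1, r2 = rng.random(x.shape), rng.random(x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (gbest - x)
        x = np.clip(x + v, 0.0, r_max)
        fit = np.array([evaluate_pruned(p) for p in x])
        improved = fit > pbest_fit
        pbest[improved], pbest_fit[improved] = x[improved], fit[improved]
        gbest = pbest[np.argmax(pbest_fit)].copy()
    return gbest  # per-layer pruning rates with the best observed fitness

if __name__ == "__main__":
    filters = np.random.randn(64, 3 * 3 * 16)  # e.g., 64 filters, 16 input channels
    C, residual = sparse_self_representation(filters)
    print("most redundant filters:", np.argsort(residual)[:5])
    print("learned layerwise rates:", pso_pruning_rates(n_layers=4))
```

In practice each fitness evaluation requires pruning and evaluating the network, so the swarm size and iteration count trade search quality against compute cost.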

Keywords: optimization; swarm optimization; particle swarm; self-representation; sparse self-representation

Journal Title: IEEE Transactions on Neural Networks and Learning Systems
Year Published: 2022
