SparseNN: A Performance-Efficient Accelerator for Large-Scale Sparse Neural Networks

Neural networks have been widely used as a powerful representation in various research domains, such as computer vision, natural language processing, and artificial intelligence. To improve application accuracy, the number of neurons and synapses keeps growing, which makes neural networks both computationally and memory intensive and therefore difficult to deploy on resource-limited platforms. Sparse methods can remove redundant neurons and synapses, but conventional accelerators cannot benefit from the resulting sparsity. In this paper, we propose an efficient accelerating method for sparse neural networks, which compresses synapse weights and processes the compressed structure with an FPGA accelerator. Our method achieves compression ratios of 40% and 20% for synapse weights in convolutional and fully-connected layers, respectively. The experimental results demonstrate that our accelerating method can boost an FPGA accelerator to a 3× speedup over a conventional one.
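The abstract does not detail the compression format the accelerator consumes. As a minimal illustrative sketch (not the paper's actual method), the following Python snippet shows one common way to compress a pruned weight matrix, a CSR layout, and to compute with the compressed structure so that zeroed synapses cost nothing. The layer shape, pruning threshold, and density are hypothetical.

```python
# Illustrative only: the paper's exact compression scheme is not given
# in this abstract. CSR is one standard format for sparse synapse weights.
import numpy as np
from scipy.sparse import csr_matrix

# Hypothetical pruned weight matrix for one fully-connected layer:
# small-magnitude synapses have been zeroed out by a sparse method.
rng = np.random.default_rng(0)
W = rng.standard_normal((256, 512))
W[np.abs(W) < 1.0] = 0.0           # prune weights below a chosen threshold

# Compress: only the nonzero weights plus their column indices and row
# pointers are stored, which is what reduces memory footprint and bandwidth.
W_csr = csr_matrix(W)
print(f"density after pruning: {W_csr.nnz / W.size:.2%}")

# A conventional accelerator computes y = W @ x over all entries, zeros
# included; a sparsity-aware one iterates only the stored nonzeros.
x = rng.standard_normal(512)
y = W_csr @ x                      # sparse matrix-vector product
assert np.allclose(y, W @ x)       # same result, less work
```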

Keywords: neural networks; sparse neural networks; SparseNN; performance-efficient accelerator

Journal Title: International Journal of Parallel Programming
Year Published: 2017
