TSUNAMI: Triple Sparsity-Aware Ultra Energy-Efficient Neural Network Training Accelerator With Multi-Modal Iterative Pruning

This article proposes the TSUNAMI, an accelerator that supports energy-efficient deep-neural-network training. The TSUNAMI supports multi-modal iterative pruning to generate zeros in both activations and weights. A tile-based dynamic activation pruning unit and a weight-memory-shared pruning unit eliminate additional memory accesses. A coarse-zero skipping controller skips multiple unnecessary multiply-and-accumulate (MAC) operations at once, while a fine-zero skipping controller skips randomly located unnecessary MAC operations. A weight sparsity balancer resolves the utilization degradation caused by weight sparsity imbalance, and the workload of each convolution core is allocated by a random channel allocator. The TSUNAMI achieves an energy efficiency of 3.42 TFLOPS/W at 0.78 V and 50 MHz with 8-bit floating-point activations and weights, and 405.96 TFLOPS/W under a 90% sparsity condition.
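The pruning and zero-skipping ideas summarized above can be illustrated in software. The sketch below is a hypothetical NumPy illustration, not the authors' hardware design: a tile-based activation pruning step that zeroes the smallest-magnitude values within each tile, and a MAC loop that skips all-zero tiles (coarse skipping) and individual zero operands (fine skipping). Function names such as prune_tiles and sparse_mac, the tile size, and the keep ratio are illustrative assumptions.

# Hypothetical NumPy sketch of the abstract's ideas (not the TSUNAMI RTL):
# tile-based activation pruning plus coarse/fine zero-skipping MAC.
import numpy as np

def prune_tiles(activations, tile_size=8, keep_ratio=0.5):
    """Zero the smallest-magnitude activations within each tile.

    Mimics tile-based dynamic activation pruning: the sparsity decision is
    made locally per tile, so no extra pass over the whole tensor is needed.
    """
    act = activations.copy()
    tiles = act.reshape(-1, tile_size)          # views into the copy
    keep = max(1, int(tile_size * keep_ratio))
    for tile in tiles:
        drop_idx = np.argsort(np.abs(tile))[: tile_size - keep]
        tile[drop_idx] = 0.0
    return act

def sparse_mac(activations, weights, tile_size=8):
    """Accumulate a dot product while skipping zero operands.

    The all-zero-tile check mimics coarse zero skipping (many MACs skipped
    at once); the per-element check mimics fine zero skipping of randomly
    located zeros.
    """
    acc = 0.0
    a_tiles = activations.reshape(-1, tile_size)
    w_tiles = weights.reshape(-1, tile_size)
    for a_tile, w_tile in zip(a_tiles, w_tiles):
        if not a_tile.any() or not w_tile.any():
            continue  # coarse skip: the whole tile contributes nothing
        for a, w in zip(a_tile, w_tile):
            if a == 0.0 or w == 0.0:
                continue  # fine skip: single zero operand
            acc += a * w
    return acc

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    acts = rng.standard_normal(64).astype(np.float32)
    wgts = rng.standard_normal(64).astype(np.float32)
    wgts[rng.random(64) < 0.9] = 0.0  # roughly 90% weight sparsity, as in the abstract
    pruned = prune_tiles(acts)
    print("dense result :", float(acts @ wgts))
    print("sparse result:", sparse_mac(pruned, wgts))

In hardware, the point of such skipping is that the zero operands are never fetched or multiplied at all, which is where the reported energy-efficiency gain under high sparsity comes from.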

Keywords: neural network; sparsity; energy; network training; tsunami; energy efficient

Journal Title: IEEE Transactions on Circuits and Systems I: Regular Papers
Year Published: 2022
