Aggressive Approximation of the SoftMax Function for Power-Efficient Hardware Implementations

Neural network models most often employ the SoftMax function in the classification stage to compute probabilities through exponentiation and division operations. To reduce the complexity and the energy consumption of this stage, several hardware-friendly approximation strategies have been disclosed in the recent past. This brief evaluates the effects of an aggressive approximation of the SoftMax layer on both classification accuracy and hardware characteristics. Experimental results demonstrate that the proposed circuit, when implemented in a 28 nm FDSOI technology, saves ~65% of silicon area with respect to competitors while dissipating less than 1 pJ. FPGA implementation results confirm a massive reduction in energy dissipation with respect to the conventional baseline architecture, without introducing penalties in Top-1 accuracy.
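For intuition, the SoftMax referenced in the abstract maps a vector of logits x to probabilities exp(x_i) / Σ_j exp(x_j), so a direct implementation needs one exponentiation per input and a division by the sum. The sketch below illustrates the general family of hardware-friendly approximations the abstract alludes to, not the specific circuit of this brief: base-e exponentiation is replaced with powers of two (a shift in fixed-point hardware) and the normalizer is rounded to a power of two so the division also reduces to a shift. The function names and the rounding scheme are illustrative assumptions.

```python
import numpy as np

def softmax_exact(x):
    """Reference SoftMax: one exponentiation per logit plus a division."""
    e = np.exp(x - np.max(x))          # subtract the max for stability
    return e / e.sum()

def softmax_pow2_sketch(x):
    """Illustrative power-of-two approximation (an assumption, NOT the
    brief's exact circuit): 2**floor(x - max) replaces e**x, so each term
    is a shift in fixed point, and the sum is rounded up to the nearest
    power of two so the final division is also a shift."""
    t = np.floor(x - np.max(x)).astype(int)   # integer exponents <= 0
    terms = np.ldexp(1.0, t)                  # 2**t, i.e. a barrel shift
    k = int(np.ceil(np.log2(terms.sum())))    # round the normalizer to 2**k
    return np.ldexp(terms, -k)                # divide by 2**k via shift

logits = np.array([2.0, 1.0, 0.1, -1.5])
print(softmax_exact(logits))        # baseline probabilities
print(softmax_pow2_sketch(logits))  # coarser values, same argmax
```

The approximate outputs are only roughly normalized, but the transform is monotone in each logit, so the largest input typically stays the largest output; this is consistent with the abstract's claim that such an approximation can leave Top-1 accuracy intact even though the probability values themselves are coarse.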

Keywords: hardware; approximation; aggressive approximation; softmax approximation; softmax function

Journal Title: IEEE Transactions on Circuits and Systems II: Express Briefs
Year Published: 2022


