The softmax function is widely used in deep neural networks (DNNs), and efficient hardware accelerators for DNNs have attracted tremendous attention. However, designing efficient hardware architectures for softmax is challenging because of its expensive exponentiation and division operations. In this brief, the softmax function is first simplified by exploring algorithmic strength reductions. A hardware-friendly, precision-adjustable calculation method for softmax is then proposed, which can meet the different precision requirements of various deep learning (DL) tasks. Based on these innovations, an efficient architecture for softmax is presented. By tuning the parameter …
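The abstract does not spell out which strength reductions are used, so the Python sketch below is only one plausible reading: exponentiation is moved to base 2 (shift-friendly in hardware), the final division is replaced by a subtraction in the log domain, and a fixed-point fraction-width parameter stands in for the adjustable precision. The function and parameter names (`softmax_log2_fixed`, `frac_bits`) are hypothetical and not taken from the brief.

```python
import numpy as np

def softmax_log2_fixed(x, frac_bits=8):
    """Hedged sketch of a precision-adjustable softmax (not the brief's exact method).

    Assumed strength reductions:
      * subtract the max so all exponents are non-positive (numerical safety),
      * work in base 2 so e^x becomes 2^(x*log2(e)), which maps well to shifts,
      * replace the normalizing division by a subtraction in the log2 domain,
      * quantize intermediates to `frac_bits` fractional bits to trade
        precision against hardware cost.
    """
    scale = 1 << frac_bits                      # fixed-point scaling factor
    q = lambda v: np.round(v * scale) / scale   # quantize to frac_bits fractional bits

    t = q((x - np.max(x)) * np.log2(np.e))      # log2-domain exponents, all <= 0
    p = 2.0 ** t                                # per-element 2^t
    log_sum = q(np.log2(np.sum(p)))             # log2 of the normalizer
    return 2.0 ** (t - log_sum)                 # division replaced by log-domain subtraction

# Example: a smaller frac_bits lowers precision but would shrink the hardware.
logits = np.array([2.0, 1.0, 0.1])
print(softmax_log2_fixed(logits, frac_bits=8))
print(softmax_log2_fixed(logits, frac_bits=4))
```

In this reading, the single `frac_bits` knob plays the role of the tunable parameter mentioned in the abstract: widening it approaches floating-point softmax accuracy, while narrowing it reduces datapath width at the cost of precision.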
               