High performance and, at the same time, energy efficiency are important yet often conflicting requirements in many fields of emerging applications. These applications range from multi-dimensional and multi-sensor digital signal processing to machine learning, such as neural network processing. Whereas conventional fixed-point and floating-point processor architectures cannot adapt to widely diverging demands on the required precision and accuracy of computations, even within a single application (e.g., in different layers of a neural network), domain-specific accelerators may be much too specialized, and thus too rigid, to cover a sufficiently wide spectrum of applications. In this tutorial brief, we give an overview of existing processor solutions that are reconfigurable or tunable in the precision or accuracy of computations. The spectrum of reviewed architectures ranges from vectorizable processors and multi- and trans-precision solutions, including GPUs, to any-time instruction-set processors. The latter operate at a fixed precision, but the accuracy of the result of each floating-point operation is encoded in the instruction word and can thus vary from instruction to instruction. This allows realizing accuracy vs. execution time or energy tradeoffs. Subsequently, we investigate several application domains, including neural network processing, linear algebra, and approximate computing, where such emerging processor architectures can be used beneficially.
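The accuracy-vs.-cost tradeoff described above can be illustrated in software. The following sketch is a hypothetical simulation (not from the brief, and not any specific architecture's ISA): it emulates a reduced-accuracy multiply-accumulate by truncating the IEEE 754 double-precision mantissa to a chosen number of bits per operation, mimicking how an any-time processor might deliver results whose accuracy is selected instruction by instruction. The function names and bit widths are illustrative assumptions.

```python
import struct

def truncate_mantissa(x: float, bits: int) -> float:
    """Keep only the top `bits` of the 52-bit double mantissa, zeroing the
    rest. This emulates a result computed to reduced accuracy (truncation,
    not rounding); `bits` plays the role of the per-instruction accuracy
    field assumed for an any-time processor. Illustrative only."""
    mask = ~((1 << (52 - bits)) - 1) & 0xFFFFFFFFFFFFFFFF
    as_int = struct.unpack("<Q", struct.pack("<d", x))[0]
    return struct.unpack("<d", struct.pack("<Q", as_int & mask))[0]

def approx_dot(a, b, bits: int) -> float:
    """Dot product in which every multiply and every accumulate is
    truncated to `bits` mantissa bits, as if each instruction carried
    its own accuracy specification."""
    acc = 0.0
    for x, y in zip(a, b):
        acc = truncate_mantissa(acc + truncate_mantissa(x * y, bits), bits)
    return acc

# Compare the error of the same dot product at several accuracy levels.
a = [0.1 * i for i in range(100)]
b = [0.01 * i for i in range(100)]
exact = approx_dot(a, b, 52)  # full double-precision mantissa
for bits in (8, 16, 32, 52):
    print(f"{bits:2d} mantissa bits -> abs. error {abs(approx_dot(a, b, bits) - exact):.3e}")
```

In a hardware realization, fewer mantissa bits would translate into shorter latency or lower energy per operation; the simulation only reproduces the accuracy side of that tradeoff.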