As machine learning technology advances, networks are becoming increasingly complex and computationally demanding, which increases both the computation time and the power consumption of the learning process. The error tolerance of neural networks has attracted attention as an approach to this problem: because neural networks can tolerate small errors, computation time and power consumption can be reduced at the expense of accuracy. In this study, we propose a method to reduce circuit power consumption by lowering the operating voltage of the static random-access memory (SRAM) used to store the weights. The proposed method operates the SRAM at two different voltages, assigning different bit error rates (BERs) to error-tolerant and non-error-tolerant bits. We demonstrate the relationship between the BER and the recognition rate, and identify combinations of BER and circuit configuration that maintain a high recognition rate.
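The dual-voltage idea can be illustrated with a small simulation. The sketch below injects independent bit flips into quantized weights, using a low BER for the most significant bits (stored at nominal voltage) and a high BER for the remaining bits (stored at reduced voltage). The function names, the 8-bit unsigned weight format, and the 4-bit MSB/LSB split are illustrative assumptions, not details from the paper.

```python
import random

def flip_bits(word, ber, bit_positions, rng):
    """Flip each bit in bit_positions independently with probability ber."""
    for b in bit_positions:
        if rng.random() < ber:
            word ^= 1 << b
    return word

def inject_sram_errors(weights, ber_msb, ber_lsb, n_msb=4, width=8, seed=0):
    """Model a dual-voltage SRAM (illustrative assumption): the n_msb most
    significant bits are stored at nominal voltage (low BER, error-critical),
    while the lower bits are stored at reduced voltage (high BER,
    error-tolerant). Weights are unsigned width-bit integers."""
    rng = random.Random(seed)
    msb_bits = range(width - n_msb, width)   # protected, low-BER bits
    lsb_bits = range(0, width - n_msb)       # low-voltage, high-BER bits
    out = []
    for w in weights:
        w = flip_bits(w, ber_msb, msb_bits, rng)
        w = flip_bits(w, ber_lsb, lsb_bits, rng)
        out.append(w)
    return out
```

Sweeping `ber_lsb` while keeping `ber_msb` near zero, then re-evaluating the network's recognition rate with the corrupted weights, is one way to explore the BER/accuracy trade-off the abstract describes.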