Hardware-based neural networks are becoming attractive because of their superior performance. One of the research challenges is to design such hardware using less area to minimize the cost of on-chip implementation. This brief proposes an area-efficient implementation of an Artificial Neural Network (ANN). The proposed method reduces the number of layers in the ANN by nearly half through a novel dual use of certain layers, denoted in the brief as hidden, or flexible, layers. These non-traditional layers are adaptable: each performs two separate functions through judicious use of two different sets of weights, so a single flexible layer does the work of two traditional ANN layers. The remaining layers are conventional fixed layers, each serving a single function. The proposed design keeps the number of fixed layers as low as possible, although some applications may still require one or more fixed layers alongside the flexible ones. The proposed method is implemented in Verilog HDL on an Altera Arria 10 GX FPGA. Its area usage is only 41% of that of the state-of-the-art method, while its power consumption and speed overheads are small.
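The weight-multiplexing idea behind the flexible layers can be illustrated with a short RTL sketch. The following is a minimal, hypothetical Verilog module (not the authors' RTL; all module, signal, and parameter names such as `flex_layer`, `phase`, `N_IN`, and `W_WIDTH` are assumptions) showing how one physical multiply-accumulate datapath could serve as two logical ANN layers by selecting between two weight memories on a phase signal:

```verilog
// Hypothetical sketch of a "flexible" layer: one physical datapath acts as
// two logical ANN layers by switching between two weight memories.
// Weight loading, accumulator clearing between passes, and the activation
// function are omitted for brevity.
module flex_layer #(
    parameter N_IN    = 8,   // inputs per neuron
    parameter W_WIDTH = 8    // weight/activation bit width
)(
    input                             clk,
    input                             rst_n,
    input                             phase,   // 0: act as layer k, 1: act as layer k+1
    input  signed [W_WIDTH-1:0]       x_in,    // streamed input activation
    input                             x_valid,
    output reg signed [2*W_WIDTH+3:0] acc      // accumulated dot product
);
    // Two weight sets stored in the same physical layer.
    reg signed [W_WIDTH-1:0] w0 [0:N_IN-1];  // weights when acting as layer k
    reg signed [W_WIDTH-1:0] w1 [0:N_IN-1];  // weights when acting as layer k+1

    reg [$clog2(N_IN)-1:0] idx;
    wire signed [W_WIDTH-1:0] w_sel = phase ? w1[idx] : w0[idx];

    always @(posedge clk or negedge rst_n) begin
        if (!rst_n) begin
            idx <= 0;
            acc <= 0;
        end else if (x_valid) begin
            acc <= acc + x_in * w_sel;  // MAC shared by both logical layers
            idx <= (idx == N_IN-1) ? 0 : idx + 1;
        end
    end
endmodule
```

Under these assumptions, the area saving comes from sharing the multiplier-accumulator and interconnect between two logical layers, at the cost of a second weight memory and a small amount of select/control logic, which is consistent with the small power and speed overheads reported in the abstract.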