In recent years, the use of artificial neural networks for object classification and event prediction has grown, driven largely by research on deep learning techniques running on hardware such as GPUs and FPGAs. Interest in neural networks also extends to embedded systems, owing to applications in smart mobile devices such as cell phones, drones, autonomous cars, and industrial robots. In embedded systems, however, hardware limitations such as memory, scalability, and power consumption must be taken into account, as they significantly affect the processing of a neural network. This article proposes a methodology to reduce a spiking neural network by applying the discrete cosine transform (DCT) and elegant pairing, contributing to the scalability of the neural network layers in hardware. The results demonstrate the effectiveness of the methodology, showing that synapses and neurons can be reduced while maintaining the correctness of the spiking neural network's response.
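The abstract does not detail the authors' implementation, but the two named building blocks are standard constructions. The sketch below, which is an illustrative assumption rather than the paper's method, shows (a) Szudzik's "elegant pairing" function, which encodes a (neuron, synapse) index pair as a single integer, and (b) a DCT-based reduction of a synaptic weight matrix in which high-frequency coefficients are discarded. All function names, matrix sizes, and the number of retained coefficients are hypothetical.

```python
# Illustrative sketch only; not the authors' implementation.
import numpy as np
from scipy.fftpack import dct, idct


def elegant_pair(x: int, y: int) -> int:
    """Szudzik's elegant pairing: map two non-negative integers to one."""
    return y * y + x if x < y else x * x + x + y


def elegant_unpair(z: int) -> tuple:
    """Inverse of elegant_pair."""
    r = int(np.floor(np.sqrt(z)))
    if z - r * r < r:
        return (z - r * r, r)
    return (r, z - r * r - r)


def reduce_weights(w: np.ndarray, keep: int) -> np.ndarray:
    """Keep only the first `keep` DCT coefficients per row (assumed scheme)."""
    coeffs = dct(w, type=2, norm="ortho", axis=1)
    coeffs[:, keep:] = 0.0  # discard high-frequency coefficients
    return idct(coeffs, type=2, norm="ortho", axis=1)


# Example: reduce a small random weight matrix and index one synapse
# with a single packed integer.
rng = np.random.default_rng(0)
w = rng.standard_normal((4, 16))
w_reduced = reduce_weights(w, keep=4)
key = elegant_pair(2, 7)            # one integer for synapse (neuron 2, input 7)
assert elegant_unpair(key) == (2, 7)
print("max reconstruction error:", np.abs(w - w_reduced).max())
```

In such a scheme, storing only the retained DCT coefficients and addressing synapses through the pairing function can shrink the memory footprint of a layer, which is the kind of hardware scalability the abstract refers to; how the paper actually combines these steps is only stated at the level of the abstract.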