Thanks to their event-driven nature, spiking neural networks (SNNs) are regarded as promising, computation-efficient models. Spiking neurons encode useful temporal information and exhibit strong robustness to noise. However, efficiently encoding spatio-temporal complexity and optimizing the training of SNNs remain open challenges. To address them, this article proposes a novel hierarchical event-driven visual system that explores how information is transmitted and represented in the retina using biologically plausible mechanisms. The cognitive model is an augmented spiking-based framework that combines the feature-learning capacity of convolutional neural networks (CNNs) with the cognitive capability of SNNs. Furthermore, the visual system is modeled in a biologically realistic way, with unsupervised learning rules and advanced spike firing-rate encoding methods. We train and test the model on several image datasets (Modified National Institute of Standards and Technology (MNIST), Canadian Institute for Advanced Research (CIFAR)-10, and their noisy variants) and show that it can process more essential information than existing cognitive models. This article also proposes a novel quantization approach that makes the spiking-based model more efficient for neuromorphic hardware implementation. The results show that the joint CNN-SNN model achieves high recognition accuracy and stronger generalization ability.
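The abstract does not detail the "advanced spike firing-rate encoding methods" it refers to; the sketch below shows only the conventional baseline, Poisson rate coding, in which each pixel intensity sets the per-step firing probability of one input neuron. All parameter names and values (`time_steps`, `max_rate`) are illustrative assumptions, not the paper's method.

```python
import numpy as np

def poisson_rate_encode(image, time_steps=100, max_rate=0.5, rng=None):
    """Encode pixel intensities as Poisson spike trains (rate coding).

    A minimal, conventional sketch: each normalized pixel intensity
    becomes the per-step firing probability of one input neuron.
    The article's specific encoding scheme may differ.
    """
    rng = np.random.default_rng() if rng is None else rng
    intensities = image.astype(np.float64).ravel() / 255.0  # normalize to [0, 1]
    probs = intensities * max_rate                          # per-step spike probability
    # spikes[t, i] == 1 if input neuron i fires at time step t
    spikes = (rng.random((time_steps, probs.size)) < probs).astype(np.uint8)
    return spikes

# Usage: encode one 28x28 MNIST-like image into a (100, 784) spike raster.
image = np.random.randint(0, 256, size=(28, 28))
spike_train = poisson_rate_encode(image)
print(spike_train.shape, spike_train.mean())  # mean approximates overall firing rate
```

The binary spike raster produced this way is what a downstream spiking layer would consume, one time step at a time.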
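The quantization approach itself is likewise not specified in the abstract. For orientation, here is a generic post-training uniform quantizer of the kind commonly used to fit weights into the low-precision synapses of neuromorphic hardware; the bit width and the symmetric range are assumptions for illustration only.

```python
import numpy as np

def uniform_quantize(weights, num_bits=4):
    """Uniformly quantize a weight tensor to symmetric signed levels.

    A generic sketch, not the article's quantization method: weights are
    scaled so the largest magnitude maps to the extreme integer level,
    rounded, clipped, and rescaled back for inference.
    """
    levels = 2 ** (num_bits - 1) - 1                 # e.g., levels in [-7, 7] for 4 bits
    max_abs = float(np.max(np.abs(weights)))
    scale = max_abs / levels if max_abs > 0 else 1.0  # guard against an all-zero tensor
    q = np.clip(np.round(weights / scale), -levels, levels)
    return q * scale, scale                           # dequantized weights + scale factor
```

Storing only the integer levels plus one scale per tensor is what reduces the memory and multiplier cost on neuromorphic targets.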