
Efficient Neuromorphic Hardware Through Spiking Temporal Online Local Learning



Local learning schemes have shown promising performance in training spiking neural networks (SNNs) and are considered a step toward more biologically plausible learning. Despite many efforts to design high-performance neuromorphic systems, a fast and efficient on-chip training algorithm is still missing, which limits the deployment of neuromorphic systems in many real-time applications. This work proposes a scalable, fast, and efficient spiking neuromorphic hardware system with on-chip local learning capability. We introduce an effective, hardware-friendly local training algorithm compatible with sparse temporal input coding and binary random classification weights, and demonstrate that it delivers competitive accuracy on different tasks. The proposed digital system exploits spike sparsity in communication, parallelism in vector–matrix operations and process-level dataflow, and the locality of training errors, which leads to low cost and fast training. The system is optimized under various performance metrics. Taking energy, speed, resources, and accuracy into account, the proposed method achieves roughly 10× the efficiency of a recent design based on direct feedback alignment (DFA) and 4.5× the efficiency of a spike-timing-dependent plasticity (STDP) method. Moreover, our hardware architecture scales linearly with network size. The method therefore shows great potential for use in various applications, especially those demanding low latency.
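The abstract does not spell out the training rule, but the combination it names (local errors plus fixed binary random classification/feedback weights, as in DFA-style learning) can be illustrated with a minimal non-spiking sketch. All dimensions, names, and the squared-error loss below are hypothetical illustration choices, not details from the paper: the output error is projected back to the hidden layer through a fixed ±1 random matrix instead of the transposed forward weights, so each layer's update depends only on locally available signals.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions (hypothetical, for illustration only)
n_in, n_hid, n_out = 8, 16, 4

W1 = rng.normal(0, 0.1, (n_hid, n_in))       # trainable hidden weights
W2 = rng.normal(0, 0.1, (n_out, n_hid))      # trainable output weights
B = rng.choice([-1.0, 1.0], (n_hid, n_out))  # fixed binary random feedback weights

def relu(x):
    return np.maximum(0.0, x)

def local_update(x, target, lr=0.01):
    """One DFA-style local update: the output error is routed to the
    hidden layer through the fixed binary matrix B rather than W2.T,
    so no weight transport or full backward pass is needed."""
    h = relu(W1 @ x)
    y = W2 @ h
    e = y - target                 # output error (squared-error loss)
    e_hid = (B @ e) * (h > 0)      # local hidden error via binary random feedback
    W2[...] -= lr * np.outer(e, h)
    W1[...] -= lr * np.outer(e_hid, x)
    return float(0.5 * e @ e)

x = rng.normal(size=n_in)
t = np.zeros(n_out)
t[1] = 1.0
losses = [local_update(x, t) for _ in range(200)]
```

Because B is fixed and binary, the hardware realization of the error projection reduces to sign-controlled additions, which is one reason such rules are attractive for on-chip training.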

Keywords: neuromorphic hardware; training; local learning; spiking neural networks

Journal Title: IEEE Transactions on Very Large Scale Integration (VLSI) Systems
Year Published: 2022


