Learning vector quantization (LVQ) neural networks have already been successfully applied to image compression and object recognition. In this paper, we propose a modular and reconfigurable pipeline architecture (MRPA) for LVQ. The MRPA consists of dynamically reconfigurable modules and realizes run-time, on-chip configuration for both recognition and learning. A prototype fabricated in 65-nm CMOS technology verifies high integration density, efficient memory utilization, good performance, and considerable flexibility in vector dimensionality, number of weight vectors, and adaptation strategies. Compared with embedded microprocessors that rely on single-instruction-multiple-data (SIMD) processing, the developed prototype increases the performance of both recognition and learning operations: it shows improvements by factors of approximately 40 and 101 on the well-established performance metrics of million connections per second (MCPS) for recognition and million connection updates per second (MCUPS) for learning, respectively.
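To make the recognition and learning operations referenced in the abstract concrete, the following is a minimal software sketch of nearest-prototype recognition and an LVQ1-style weight update. The paper's architecture supports configurable vector dimensionality, prototype counts, and adaptation strategies, so the specific rule, learning rate, and sizes below are illustrative assumptions rather than the authors' implementation; the MCPS figure in the comment is likewise the commonly used definition (weights touched per pattern times patterns per second, divided by one million), not a measurement from the paper.

```python
import numpy as np

def recognize(x, prototypes):
    """Recognition step: return the index of the nearest prototype (squared Euclidean distance)."""
    dists = np.sum((prototypes - x) ** 2, axis=1)
    return int(np.argmin(dists))

def lvq1_update(x, label, prototypes, proto_labels, lr=0.05):
    """Learning step (LVQ1 rule, assumed here): move the winning prototype
    toward x if its class matches the sample's label, away from x otherwise."""
    w = recognize(x, prototypes)
    sign = 1.0 if proto_labels[w] == label else -1.0
    prototypes[w] += sign * lr * (x - prototypes[w])
    return w

# Hypothetical sizes: 3 prototypes of dimension 4.
rng = np.random.default_rng(0)
prototypes = rng.normal(size=(3, 4))
proto_labels = np.array([0, 1, 2])
x = rng.normal(size=4)
winner = lvq1_update(x, label=1, prototypes=prototypes, proto_labels=proto_labels)

# Conventional throughput metric (assumption, not from the paper):
# MCPS = (dimension * number_of_prototypes * recognitions_per_second) / 1e6
```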
               