This paper presents a learning algorithm for a one-layer classifier architecture, a vector-matrix multiplier (VMM) followed by a k-winner-take-all (WTA) stage, on a large-scale field-programmable analog array (FPAA). The technique opens opportunities for embedded, ultra-low-power machine learning, a class of techniques typically reserved for large servers. To develop the training algorithm, the paper first establishes fundamental equivalent transformations for VMM + WTA classifier networks: a VMM + WTA structure can exactly compute a self-organizing map (SOM) or vector quantization (VQ) operation, among other transformations. SOM, VQ, and Gaussian mixture model (GMM) learning concepts are used to train this single-layer network. An on-chip clustering step determines the initial weight set for the ideal target and background values. Null symbols are important to the algorithm and are set at the midpoints of the target values. Results are presented both as numerical simulations of the VMM + WTA learning network, which illustrate limitations of numerical differential-equation simulation for this problem, and as experimental measurements on a system-on-chip FPAA device.
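As a concrete illustration of the VQ equivalence described above, the sketch below shows in Python/NumPy how a vector-matrix multiply followed by a winner-take-all stage realizes a nearest-centroid (VQ) operation: with equal-norm weight rows and inputs, the inner-product winner coincides with the nearest centroid. All names, data, and parameters here (vmm_wta_classify, the toy templates, k=1) are illustrative assumptions for exposition, not the paper's analog implementation.

```python
import numpy as np

def vmm_wta_classify(x, weights, k=1):
    """Classify input x against stored weight rows via VMM + k-WTA.

    For equal-norm weight rows and a normalized input, maximizing the
    inner product w_i . x is equivalent to minimizing the Euclidean
    distance ||x - w_i||, so the WTA winner realizes a VQ operation.
    """
    scores = weights @ x               # vector-matrix multiply (VMM)
    winners = np.argsort(scores)[-k:]  # k-winner-take-all stage
    return winners

# Toy example: three stored templates (rows); classify a noisy input.
rng = np.random.default_rng(0)
templates = np.eye(3)                           # idealized target weights
x = templates[1] + 0.1 * rng.standard_normal(3)
x /= np.linalg.norm(x)                          # normalize the input
print(vmm_wta_classify(x, templates, k=1))      # -> [1]

# The abstract notes that "null symbols" are set from midpoints of the
# target values; one naive reading of that statement (an assumption on
# our part) is a template halfway between two targets:
null_symbol = 0.5 * (templates[0] + templates[1])
```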