Training neural networks with the Moore–Penrose (MP) inverse has recently gained attention because of its noniterative nature. However, a significant drawback of MP-inverse-based learning is that its memory consumption grows with the size of the dataset. In this article, building on a partitioning of the MP inverse, we propose a blockwise recursive MP inverse formulation (BRMP) for network learning that has a low memory footprint while preserving training effectiveness. BRMP is exactly equivalent to its batchwise counterpart, since neither approximation nor assumption is made in the derivation. Further exploration of this recursive method leads to a switching structure among three different scenarios, which also reveals that the well-known recursive least-squares method is a special case of the proposed technique. We then apply BRMP to the training of radial basis function networks as well as multilayer perceptrons. The experimental validation covers both regression and classification tasks.
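To make the blockwise idea concrete, the sketch below implements the recursive least-squares special case that the abstract mentions: output weights are updated one data chunk at a time, so only a block-sized matrix is ever inverted. This is a minimal illustration, not the paper's BRMP; the function name `blockwise_rls`, the ridge-regularized initialization, and the chunk interface are assumptions, and the switching structure that handles the general rank-deficient cases is not reproduced here.

```python
import numpy as np

def blockwise_rls(blocks, n_features, n_outputs, ridge=1e-6):
    """Blockwise recursive least-squares estimate of output weights W.

    `blocks` yields (H_k, T_k) pairs: hidden-layer activations (rows = samples,
    columns = features) and targets for the k-th data chunk, so only one chunk
    is held in memory at a time. A small ridge term keeps the initial inverse
    well defined (an assumption of this sketch, not of BRMP).
    """
    P = np.eye(n_features) / ridge            # P_0 = (ridge * I)^{-1}
    W = np.zeros((n_features, n_outputs))     # W_0 = 0
    for H, T in blocks:
        # Woodbury-style update: only a (block_size x block_size) inverse.
        S = np.linalg.inv(np.eye(H.shape[0]) + H @ P @ H.T)
        P = P - P @ H.T @ S @ H @ P           # updated inverse covariance
        W = W + P @ H.T @ (T - H @ W)         # correct W with the new block
    return W
```

Processing the data as a single block recovers the usual ridge-regularized batch solution, which is the sense in which a blockwise recursion can match its batchwise counterpart while keeping memory bounded by the chunk size.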