A batch variable learning rate gradient descent algorithm is proposed to efficiently train a neuro-fuzzy network of zero-order Takagi-Sugeno inference systems. To exploit the advantages of regularization, a smoothing $L_{1/2}$ regularization term is used to obtain a more appropriately sparse network. By combining the second-order information of the smoothing error function, a variable learning rate is chosen along the steepest descent direction, which avoids a line search procedure and can reduce the computational cost. To appropriately adjust the Lipschitz constant of the smoothing error function that appears in the learning rate, a new scheme is proposed that introduces a hyper-parameter. The article also applies a modified secant equation to estimate the Lipschitz constant, which greatly reduces oscillation and improves the robustness of the algorithm. Under appropriate assumptions, a convergence result for the proposed algorithm is given. Simulation results on two identification and classification problems show that, compared with the common batch gradient descent algorithm and a variable learning rate gradient-based algorithm, the proposed algorithm achieves better numerical performance and promotes the sparsity of the network.
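To illustrate the general idea described in the abstract, the following is a minimal Python sketch, not the paper's exact method. It assumes a plain linear model as a stand-in for the zero-order Takagi-Sugeno network, uses $(w^2+\varepsilon)^{1/4}$ as one common smoothing of $|w|^{1/2}$ (the paper's smoothing function may differ), and estimates the Lipschitz constant with a basic secant formula rather than the modified secant equation; the hyper-parameter `mu` plays the role of the scheme that adjusts the Lipschitz constant in the learning rate.

```python
import numpy as np

# Smoothed L_{1/2} regularizer: (w^2 + eps)^{1/4} is one common smoothing
# of |w|^{1/2}; the exact smoothing used in the paper may differ (assumption).
def smoothed_l12(w, eps=1e-6):
    return np.sum((w ** 2 + eps) ** 0.25)

def smoothed_l12_grad(w, eps=1e-6):
    return 0.5 * w * (w ** 2 + eps) ** (-0.75)

# Batch (full-data) error for a simple linear model, standing in for the
# zero-order Takagi-Sugeno neuro-fuzzy network, which is omitted here.
def error(w, X, y, lam):
    r = X @ w - y
    return 0.5 * np.mean(r ** 2) + lam * smoothed_l12(w)

def error_grad(w, X, y, lam):
    r = X @ w - y
    return X.T @ r / len(y) + lam * smoothed_l12_grad(w)

# Batch gradient descent with a variable learning rate eta_k = 1 / (mu * L_k),
# where L_k is a secant-type estimate of the local Lipschitz constant of the
# gradient and mu > 1 is a hyper-parameter; this mirrors the abstract's idea
# but is not the paper's exact update rule.
def train(X, y, lam=1e-3, mu=2.0, iters=500, L0=1.0, tol=1e-12):
    w = np.zeros(X.shape[1])
    g = error_grad(w, X, y, lam)
    L = L0
    for _ in range(iters):
        eta = 1.0 / (mu * L)          # variable learning rate, no line search
        w_new = w - eta * g           # steepest descent step
        g_new = error_grad(w_new, X, y, lam)
        dw, dg = w_new - w, g_new - g
        if np.linalg.norm(dw) > tol:
            # basic secant estimate of the Lipschitz constant
            L = max(np.linalg.norm(dg) / np.linalg.norm(dw), 1e-8)
        w, g = w_new, g_new
    return w
```

In this sketch, larger values of `mu` shrink the step and damp oscillation at the cost of slower progress, which is the trade-off the hyper-parameter scheme and the modified secant equation in the paper are designed to manage.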