Four gradient-based recurrent neural networks for computing the Drazin inverse of a square real matrix are developed. Theoretical analysis shows that any monotonically increasing odd activation function ensures the global convergence of the defined neural network models. Computer simulation results further substantiate that the considered neural networks compute the Drazin inverse accurately and efficiently. Moreover, the presented neural networks exhibit superior convergence when power-sigmoid activation functions are used, compared with models based on linear activation functions.
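
The abstract does not spell out the four models. As a minimal illustrative sketch of the general technique, the snippet below simulates one gradient-based neural dynamic with a power-sigmoid activation by explicit Euler integration, and recovers the Drazin inverse through the known representation A^D = A^k (A^(2k+1))^+ A^k, where k is the index of A. The error function, the zero initial state, the power-sigmoid parameters, and the step size are assumptions made here for illustration, not the paper's actual models.

```python
# Hedged sketch: gradient neural network (GNN) dynamic for the Drazin inverse.
# Not the paper's four models; error function and parameters are illustrative.
import numpy as np

def power_sigmoid(E, p=3, xi=4.0):
    # Monotonically increasing odd activation: power rule for |e| >= 1,
    # a scaled odd sigmoid for |e| < 1, matched so that f(1) = 1.
    sig = (1 + np.exp(-xi)) / (1 - np.exp(-xi)) \
        * (1 - np.exp(-xi * E)) / (1 + np.exp(-xi * E))
    return np.where(np.abs(E) >= 1.0, np.sign(E) * np.abs(E) ** p, sig)

def gnn_pinv(B, lr=0.5, steps=20000):
    # Euler-discretized gradient flow dV/dt = -gamma * M @ F(M V - B^T),
    # with M = B^T B, applied to the consistent normal equation M V = B^T.
    # Starting from V(0) = 0, the trajectory stays in range(B^T), so the
    # equilibrium is the Moore-Penrose inverse B^+.
    s = np.linalg.norm(B, 2)          # rescale so residuals start at O(1)
    Bn = B / s
    M, C = Bn.T @ Bn, Bn.T
    V = np.zeros_like(C)
    for _ in range(steps):
        V -= lr * M @ power_sigmoid(M @ V - C)
    return V / s                      # (B/s)^+ = s * B^+, hence B^+ = V / s

def drazin(A, k):
    # Wei's representation: A^D = A^k (A^(2k+1))^+ A^k for k >= ind(A).
    Ak = np.linalg.matrix_power(A, k)
    return Ak @ gnn_pinv(np.linalg.matrix_power(A, 2 * k + 1)) @ Ak

A = np.array([[2.0, 1.0], [0.0, 0.0]])   # index-1 matrix
print(drazin(A, 1))                      # approx [[0.5, 0.25], [0, 0]] = A^D
```

Because the power-sigmoid branch amplifies residuals larger than one, the sketch normalizes B by its spectral norm before integrating, which keeps the explicit Euler step stable; the published models and their convergence proofs should be consulted for the exact dynamics.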
               