Varying Infimum Gradient Descent Algorithm for Agent-Server Systems Using Different Order Iterative Preconditioning Methods
In the traditional gradient descent (T-GD) algorithm, the convergence rate depends strongly on the condition number of the information matrix: a larger condition number leads to a poorer optimal convergence factor infimum $\mu_{\text{op}}$, which sets a ceiling on the convergence rate. That is, once the information matrix is fixed, the convergence factor of the T-GD algorithm can at best reach the infimum $\mu_{\text{op}}$. This article studies a varying infimum gradient descent algorithm, which lowers the infimum by using iterative preconditioning methods of different orders, as follows: first, for the infinite iterative algorithm, the infimum becomes smaller and smaller as the number of iterations increases; second, for the finite iterative algorithm, the infimum equals zero, and the parameter estimates can be obtained in a single iteration; third, an adaptive interval is constructed between zero and $\mu_{\text{op}}$, which establishes a link between the least squares and T-GD algorithms. Based on the varying infimum gradient descent algorithm, researchers can adaptively choose preconditioning matrices for different kinds of models on a case-by-case basis. The convergence analysis and simulation examples show the effectiveness of the proposed algorithms.
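As a rough illustration (not the paper's algorithm), the sketch below contrasts the two extremes the abstract describes for a quadratic least-squares cost: plain T-GD, whose per-iteration error contraction is bounded below by the infimum $\mu_{\text{op}} = (\kappa-1)/(\kappa+1)$ where $\kappa$ is the condition number of the information matrix, and a fully preconditioned step with $P = H^{-1}$, for which the infimum drops to zero and the estimate is recovered in one iteration. The problem sizes, data, and the choice of preconditioner are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Ill-conditioned linear regression: y = Phi @ theta_true (noiseless, illustrative)
Phi = rng.standard_normal((100, 3)) @ np.diag([10.0, 1.0, 0.1])
theta_true = np.array([1.0, -2.0, 0.5])
y = Phi @ theta_true

H = Phi.T @ Phi          # information matrix
g = Phi.T @ y
eigs = np.linalg.eigvalsh(H)           # ascending eigenvalues
lam_min, lam_max = eigs[0], eigs[-1]
kappa = lam_max / lam_min              # condition number of H
mu_op = (kappa - 1) / (kappa + 1)      # convergence factor infimum for T-GD

# T-GD with the optimal fixed step size 2/(lam_min + lam_max):
# the error still contracts no faster than mu_op per iteration.
theta_gd = np.zeros(3)
step = 2.0 / (lam_min + lam_max)
for _ in range(2000):
    theta_gd = theta_gd - step * (H @ theta_gd - g)

# Fully preconditioned step (P = H^{-1}): infimum is zero,
# and the exact estimate is obtained in a single iteration.
theta_pre = np.zeros(3)
theta_pre = theta_pre - np.linalg.solve(H, H @ theta_pre - g)

print("condition number:", kappa)
print("T-GD infimum mu_op:", mu_op)
print("T-GD error:", np.linalg.norm(theta_gd - theta_true))
print("preconditioned error:", np.linalg.norm(theta_pre - theta_true))
```

Because the columns of `Phi` are scaled by three orders of magnitude, $\kappa$ is large and $\mu_{\text{op}}$ sits close to 1, so T-GD still carries visible error after 2000 iterations, while the fully preconditioned step lands on the true parameters immediately, matching the "finite iterative" case in the abstract.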