
An Adaptive Optimization Algorithm Based on Hybrid Power and Multidimensional Update Strategy


Recently, adaptive learning rate optimization algorithms have shown excellent performance in the field of deep learning. However, the exponential moving average method can lead to convergence problems in some cases, such as converging only to a suboptimal minimum. Although the AMSGrad algorithm offers a remedy for these convergence problems, its practical performance is close to, or even weaker than, that of Adam. In this paper, a new update rule is proposed that mixes high powers of the historical and current squared gradients to construct a first-order optimization algorithm with a targeted adaptive learning rate. This algorithm not only overcomes the convergence problems encountered by most current optimization algorithms but also converges quickly. It outperforms state-of-the-art algorithms on various real-world datasets; for example, in time series prediction tasks its forecast root-mean-square error is about 20% lower on average than that of the Adam and AMSGrad algorithms.
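The abstract only describes the update rule at a high level; the paper's exact formula is not reproduced on this page. As a rough illustration of what mixing "high power historical and current squared gradients" could look like, here is a minimal NumPy sketch. The function name hybrid_power_update, the power parameter p, and the mixing formula are all illustrative assumptions, not the authors' published algorithm.

```python
import numpy as np

def hybrid_power_update(theta, grad_fn, steps=1000, lr=0.05,
                        beta1=0.9, beta2=0.999, p=2.0, eps=1e-8):
    """Adam-style loop whose second moment mixes the p-th powers of the
    historical statistic and the current squared gradient (illustrative
    guess at the paper's idea, not the published update rule)."""
    m = np.zeros_like(theta)  # first moment, as in Adam
    v = np.zeros_like(theta)  # hybrid-power second moment
    for t in range(1, steps + 1):
        g = grad_fn(theta)
        m = beta1 * m + (1 - beta1) * g
        # Hypothetical "hybrid power" rule: instead of Adam's plain EMA of
        # g**2, combine the p-th powers of the old statistic and the new
        # squared gradient, then take the p-th root so v keeps the g**2 scale.
        v = (beta2 * v**p + (1 - beta2) * (g**2)**p) ** (1.0 / p)
        m_hat = m / (1 - beta1**t)  # Adam's first-moment bias correction
        theta = theta - lr * m_hat / (np.sqrt(v) + eps)
    return theta

# Toy usage: minimize f(x) = (x - 3)^2 from x = 0; theta approaches 3.0.
theta = hybrid_power_update(np.array([0.0]), lambda x: 2.0 * (x - 3.0))
```

Note that for p = 1 this rule reduces to Adam's usual exponential moving average of the squared gradient, while as p grows the weighted power mean approaches max(v, g**2), an AMSGrad-like long-memory statistic; a power between those extremes is one plausible reading of how a hybrid-power update could address the convergence issues the abstract attributes to the plain exponential moving average.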

Keywords: optimization; convergence; optimization algorithm; adaptive optimization; power

Journal Title: IEEE Access
Year Published: 2019
