Incremental PID Controller-Based Learning Rate Scheduler for Stochastic Gradient Descent.

The learning rate plays a vital role in deep neural network (DNN) training. This study introduces the incremental proportional-integral-derivative (PID) controller, widely used in automatic control, as a learning rate scheduler for stochastic gradient descent (SGD). To calculate the current learning rate automatically, we use feedback control to relate training losses to learning rates, yielding two incremental PID learning-rate schedulers: PID-Base and PID-Warmup. The new schedulers reduce dependence on the initial learning rate and achieve higher accuracy. Compared with multistep learning rates (MSLR), cyclical learning rates (CLR), and SGD with warm restarts (SGDR), the incremental PID learning rates obtain higher accuracy on CIFAR-10, CIFAR-100, and Tiny-ImageNet-200. We believe these methods can improve the performance of SGD.
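Based on the abstract, the scheduler applies the incremental (velocity) form of a PID controller, which updates the learning rate by a delta computed from the recent history of an error signal derived from the training loss. Below is a minimal PyTorch-style sketch of that idea; the class name, the gains, the error definition (here, the step-to-step drop in loss), the sign convention, and the clamping bounds are all illustrative assumptions, not the paper's exact formulation.

```python
# Minimal sketch of an incremental PID learning-rate scheduler for SGD.
# Names, gains, and the error definition are illustrative assumptions
# based on the abstract, not the paper's exact method.

import torch


class IncrementalPIDLRScheduler:
    """Incremental (velocity-form) PID update of the learning rate:

        du(t) = Kp*(e(t) - e(t-1)) + Ki*e(t) + Kd*(e(t) - 2*e(t-1) + e(t-2))
        lr(t) = lr(t-1) + du(t)

    e(t) is an error signal derived from the training loss; here we
    assume e(t) = loss(t-1) - loss(t), so a falling loss gives a
    positive error and nudges the learning rate upward.
    """

    def __init__(self, optimizer, kp=1e-3, ki=1e-4, kd=1e-4,
                 min_lr=1e-5, max_lr=1.0):
        self.optimizer = optimizer
        self.kp, self.ki, self.kd = kp, ki, kd
        self.min_lr, self.max_lr = min_lr, max_lr
        self.prev_loss = None
        self.e1 = 0.0  # e(t-1)
        self.e2 = 0.0  # e(t-2)

    def step(self, loss):
        if self.prev_loss is None:     # first call: no error history yet
            self.prev_loss = loss
            return
        e = self.prev_loss - loss      # assumed error signal
        self.prev_loss = loss

        # Incremental PID delta from the last three error values.
        du = (self.kp * (e - self.e1)
              + self.ki * e
              + self.kd * (e - 2.0 * self.e1 + self.e2))
        self.e2, self.e1 = self.e1, e

        for group in self.optimizer.param_groups:
            new_lr = group['lr'] + du
            group['lr'] = min(self.max_lr, max(self.min_lr, new_lr))


# Usage sketch: adjust the learning rate once per step from the loss.
model = torch.nn.Linear(10, 2)
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
scheduler = IncrementalPIDLRScheduler(optimizer)

for step in range(20):
    x, y = torch.randn(64, 10), torch.randint(0, 2, (64,))
    loss = torch.nn.functional.cross_entropy(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    scheduler.step(loss.item())
```

Because the incremental form only ever adds a delta to the previous learning rate rather than recomputing it from an accumulated integral term, it plausibly accounts for the reduced dependence on the initial learning rate that the abstract reports, though the paper's exact mechanism may differ.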

Keywords: incremental PID; PID controller; learning rate; learning rate scheduler

Journal Title: IEEE Transactions on Neural Networks and Learning Systems
Year Published: 2022
