Continuous-time optimization is an active research field in which many valuable results have been proposed. Since fast convergence is always desirable in an optimization algorithm, this letter considers fixed-time convergence in continuous-time optimization. Unlike existing fixed-time schemes, a novel fixed-time gradient method with a fractional adaptive gain is proposed; its convergence time is determined by the first positive zero of a Mittag-Leffler function and is independent of initial conditions. Furthermore, to avoid possible singularity, two non-singular fixed-time gradient methods are proposed. Finally, all the results are extended to a second-order algorithm, whose convergence time is proven to be independent of both the initial conditions and the shape of the target function.