Optimal control of discrete-time systems is investigated under a less common performance measure, in which the cost to be minimized is the maximum, rather than the sum, of the per-stage costs over the control horizon. Three scenarios are studied: a finite-horizon, a discounted infinite-horizon, and an undiscounted infinite-horizon performance measure. For each case, the Bellman equation is derived by direct application of dynamic programming, and necessary and sufficient conditions for an optimal control are established from this equation. A motivating example on the optimal control of dc-dc buck power converters is presented.
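For the finite-horizon case described above, the Bellman recursion replaces the usual sum with a maximum: V_k(x) = min_u max(g(x, u), V_{k+1}(f(x, u))). The sketch below illustrates this backward recursion on a small discrete system; the toy dynamics, stage cost, and horizon are illustrative assumptions, not the paper's buck-converter example.

```python
# Hedged sketch: finite-horizon minimax ("bottleneck") dynamic programming
# on a small discrete system, assuming zero terminal cost. The toy system
# below is hypothetical and only illustrates the recursion's shape.

def minimax_dp(states, actions, f, g, horizon):
    """Backward recursion for V_k(x) = min_u max(g(x,u), V_{k+1}(f(x,u))),
    with V_N(x) = 0. Returns the final value function and per-stage policies."""
    V = {x: 0.0 for x in states}          # terminal values V_N(x) = 0
    policies = []
    for _ in range(horizon):
        V_new, pi = {}, {}
        for x in states:
            # cost-to-go of action u: max of the stage cost and the future value
            best_u = min(actions, key=lambda u: max(g(x, u), V[f(x, u)]))
            V_new[x] = max(g(x, best_u), V[f(x, best_u)])
            pi[x] = best_u
        policies.insert(0, pi)            # policies[k] is the stage-k policy
        V = V_new
    return V, policies

# Toy example: 5 states on a ring; an action shifts the state by -1, 0, or +1;
# the stage cost is the ring distance from state 0.
states = list(range(5))
actions = [-1, 0, 1]
f = lambda x, u: (x + u) % 5
g = lambda x, u: min(x, 5 - x)
V, pols = minimax_dp(states, actions, f, g, horizon=4)
```

Because the criterion is a maximum rather than a sum, the value at a state is dominated by the worst stage encountered along the trajectory; in the toy example the optimal policy steers toward state 0 so that later stage costs never exceed the unavoidable cost at the current state.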