In this paper, we determine the approximation ratio of a linear-saturated control policy for a typical robust-stabilization problem. We consider a system whose state integrates the discrepancy between an unknown but bounded disturbance and the control. The control aims to keep the state within a target set, whereas the disturbance aims to push the state outside the target set by opposing the control action. The literature often addresses this kind of problem with a linear-saturated control policy. We show that this policy approximates the optimal control policy by reframing the problem as a quadratic zero-sum differential game. We prove that the approximation ratio is asymptotically bounded by 2, and that it is upper bounded by 2 in the case of a one-dimensional system. In the latter case, we also discuss how the approximation ratio may appear to change when the system's demand is subject to uncertainty. In conclusion, we compare the approximation ratio of the linear-saturated policy with that of a family of control policies that generalize the bang-bang policy.
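To make the setting concrete, the following is a minimal sketch of the kind of system the abstract describes, assuming a scalar integrator dx/dt = w(t) − u(t) with a bounded disturbance and a saturated linear feedback. The model, the adversarial disturbance heuristic, and all parameter values (u_max, w_max, k) are illustrative assumptions, not the paper's actual formulation.

```python
import numpy as np

# Hypothetical parameters -- the paper's actual model and constants are not
# given in the abstract; these values are illustrative only.
u_max = 1.0   # control saturation level
w_max = 0.8   # disturbance bound, |w(t)| <= w_max
k = 5.0       # linear gain of the saturated policy
dt = 1e-3     # Euler integration step
T = 10.0      # simulation horizon

def u_sat(x):
    """Linear-saturated policy: linear in the state, clipped at +/- u_max."""
    return np.clip(k * x, -u_max, u_max)

def w_adv(x):
    """Adversarial disturbance heuristic: pushes the state away from the
    target set by opposing the control (a bang-bang-style choice, not the
    paper's optimal adversary)."""
    return w_max * np.sign(x) if x != 0 else w_max

x = 0.5  # initial state
for _ in range(int(T / dt)):
    # The state integrates the discrepancy between disturbance and control.
    x += dt * (w_adv(x) - u_sat(x))

print(f"final state: {x:.4f}")
```

Under these assumed values, since u_max > w_max the saturated control eventually dominates the disturbance and the state settles near the point where k*x = w_max, illustrating how a linear-saturated policy can keep the state in a neighborhood of the target despite a worst-case-style opponent.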
               