Quasi-Linear Parameter Varying (Q-LPV) systems are often obtained as convex combinations of LTI models and have been widely applied to the control of nonlinear systems. An attractive feature is that the model can be adapted online via state- or input-dependent scheduling parameters to reflect the nonlinear system dynamics while retaining an overall linear structure for design purposes. In the context of Model Predictive Control (MPC), it is desirable that the online optimization problem be a Quadratic Program (QP), which can be solved efficiently. However, an impediment arises when Q-LPV models are used with MPC: the variation of the scheduling parameters over the prediction horizon casts the online optimization problem into a Nonlinear Program (NLP), which is computationally demanding. Thus, the benefits of using the Q-LPV model predictions in MPC are lost. A QP-based sub-optimal MPC is obtained if the Q-LPV scheduling parameters are treated as constant, or frozen, over the prediction horizon and updated whenever a new measurement becomes available at each sampling instant. However, the stability of such a sub-optimal Q-LPV MPC is not clear. In this letter, we consider a class of Q-LPV systems with an additional affine term (Q-LPV-A), represented as a convex combination of the vertices of a polytope, which corresponds to a piecewise-affine model, and we examine the stability of the quadratic, sub-optimal MPC when the Q-LPV scheduling parameters are held constant over the prediction horizon. We derive a design condition for obtaining a stabilizing MPC law for such Q-LPV systems. These aspects are illustrated on the Van de Vusse benchmark example.
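As a rough illustration of the frozen-parameter idea described in the abstract (this is our own minimal sketch, not the formulation or code from the letter): freezing the scheduling parameters at their currently measured values collapses the polytopic Q-LPV-A prediction model into a single affine model, so the finite-horizon problem with a quadratic cost and linear constraints becomes a QP. The vertex matrices, the scheduling rule alpha_of, the weights, the horizon, and the input bound below are all illustrative placeholders.

```python
# Minimal sketch (our own construction, not the authors' code): one step of a
# sub-optimal Q-LPV-A MPC in which the scheduling parameters are frozen at
# their current values over the whole prediction horizon, so the online
# problem reduces to a QP.
import numpy as np
import cvxpy as cp

# Two-vertex polytopic Q-LPV-A model: x+ = sum_i alpha_i(x) * (A_i x + B_i u + f_i)
A = [np.array([[1.0, 0.1], [0.0, 0.9]]), np.array([[1.0, 0.1], [0.0, 0.7]])]
B = [np.array([[0.0], [0.1]]), np.array([[0.0], [0.2]])]
f = [np.array([0.0, 0.0]), np.array([0.01, 0.0])]

def alpha_of(x):
    """State-dependent scheduling: convex weights on the two vertices (placeholder rule)."""
    a1 = float(np.clip(abs(x[1]), 0.0, 1.0))
    return np.array([1.0 - a1, a1])

def solve_frozen_qp(x0, N=10):
    """One MPC step with the scheduling frozen at alpha_of(x0) over the horizon -> a QP."""
    alpha = alpha_of(x0)
    # The frozen convex combination yields a single affine prediction model.
    Abar = sum(a * Ai for a, Ai in zip(alpha, A))
    Bbar = sum(a * Bi for a, Bi in zip(alpha, B))
    fbar = sum(a * fi for a, fi in zip(alpha, f))

    Q, R = np.eye(2), 0.1 * np.eye(1)       # illustrative stage-cost weights
    x = cp.Variable((2, N + 1))
    u = cp.Variable((1, N))
    cost, constr = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
        constr += [x[:, k + 1] == Abar @ x[:, k] + Bbar @ u[:, k] + fbar,
                   cp.abs(u[:, k]) <= 1.0]  # illustrative input bound
    cp.Problem(cp.Minimize(cost), constr).solve()
    return u.value[:, 0]  # apply only the first input in receding-horizon fashion

print(solve_frozen_qp(np.array([0.5, -0.2])))
```

At each sampling instant the measured state is used to re-evaluate the scheduling parameters and a fresh QP is solved; the letter's contribution is a design condition under which this frozen-parameter, sub-optimal scheme is stabilizing.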