Stochastic Model Predictive Control (MPC) has gained popularity thanks to its ability to overcome the conservativeness of robust approaches, at the expense of a higher computational demand. This is a critical issue, especially for sampling-based methods. In this letter we propose a policy-learning MPC approach that aims to reduce the cost of solving stochastic optimization problems. The presented scheme relies on neural networks to identify a mapping between the current state of the system and the probabilistic constraints. This allows the sample complexity to be reduced to at most the dimension of the decision variable, significantly scaling down the computational burden of stochastic MPC approaches while preserving the same probabilistic guarantees. The efficacy of the proposed policy-learning MPC is demonstrated by means of a numerical example.
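To make the idea concrete, the following is a minimal sketch of the mechanism the abstract describes: a neural network maps the current state to a tightening of the nominal constraints, so the online MPC problem becomes a cheap deterministic one rather than a large sampled program. Everything here is an assumption for illustration, not the authors' implementation: the double-integrator dynamics, the `constraint_net` mapping (an untrained stand-in network), and all dimensions and gains are hypothetical.

```python
# Sketch: policy-learning MPC with a state-dependent constraint tightening.
# All dynamics, network weights, and parameters below are illustrative
# assumptions; the paper's trained network would replace constraint_net.
import numpy as np
from scipy.optimize import minimize

# Assumed double-integrator dynamics
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
N = 10          # prediction horizon (assumed)
u_max = 1.0     # nominal input bound (assumed)

def constraint_net(x):
    """Hypothetical learned mapping: state -> constraint tightening.
    Stands in for the paper's trained network; here it is a fixed
    random MLP used only to keep the example self-contained."""
    rng = np.random.default_rng(0)
    W1, b1 = rng.normal(size=(8, 2)) * 0.1, np.zeros(8)
    W2, b2 = rng.normal(size=(1, 8)) * 0.1, np.zeros(1)
    h = np.tanh(W1 @ x + b1)
    # Squash the output into (0, 0.2) so the tightening stays feasible
    return 0.2 / (1.0 + np.exp(-(W2 @ h + b2)))

def mpc_step(x0):
    """Solve the tightened deterministic MPC problem at the current state."""
    delta = float(constraint_net(x0))   # state-dependent tightening
    u_bound = u_max - delta             # tightened input constraint

    def cost(u):
        # Roll the dynamics forward and accumulate a quadratic cost
        x, J = x0.copy(), 0.0
        for uk in u:
            J += x @ x + 0.1 * uk**2
            x = A @ x + (B * uk).ravel()
        return J + x @ x

    res = minimize(cost, np.zeros(N), method="SLSQP",
                   bounds=[(-u_bound, u_bound)] * N)
    return res.x[0]                     # receding horizon: apply first input

x = np.array([1.0, 0.0])
print("first control input:", mpc_step(x))
```

The point of the sketch is the division of labor: the (offline-trained) network absorbs the probabilistic constraint evaluation, so the online step solves a single deterministic problem instead of drawing many scenario samples at each control instant.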