Fast Stochastic MPC Implementation via Policy Learning

Stochastic Model Predictive Control (MPC) has gained popularity thanks to its ability to overcome the conservativeness of robust approaches, at the expense of a higher computational demand. This is a critical issue, especially for sampling-based methods. In this letter we propose a policy-learning MPC approach that aims to reduce the cost of solving stochastic optimization problems. The presented scheme relies on neural networks to identify a mapping between the current state of the system and the probabilistic constraints. This makes it possible to reduce the sample complexity to at most the dimension of the decision variable, significantly scaling down the computational burden of stochastic MPC approaches while preserving the same probabilistic guarantees. The efficacy of the proposed policy-learning MPC is demonstrated by means of a numerical example.
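
To make the idea concrete, below is a minimal sketch (not the authors' exact formulation) of how a learned state-to-constraint mapping could be embedded in a linear MPC problem: a small neural network maps the current state to constraint-tightening terms, and a nominal quadratic MPC is then solved with the tightened bounds. The system matrices, network weights, and the names constraint_tightening and solve_mpc are hypothetical placeholders; in the letter the mapping is trained offline so that the original chance constraints retain their probabilistic guarantees.

import numpy as np
import cvxpy as cp

# Hypothetical double-integrator system (placeholder, not from the paper)
A = np.array([[1.0, 0.1], [0.0, 1.0]])
B = np.array([[0.005], [0.1]])
nx, nu, N = 2, 1, 10                      # state dim, input dim, horizon
Q, R = np.eye(nx), 0.1 * np.eye(nu)
x_max, u_max = np.array([5.0, 2.0]), 1.0

# Placeholder "policy" network mapping the current state to constraint
# tightenings; in practice its weights would be learned offline.
rng = np.random.default_rng(0)
W1, b1 = 0.1 * rng.standard_normal((16, nx)), np.zeros(16)
W2, b2 = 0.01 * rng.standard_normal((nx, 16)), np.zeros(nx)

def constraint_tightening(x0):
    """Small MLP forward pass returning a nonnegative tightening per state bound."""
    h = np.tanh(W1 @ x0 + b1)
    return np.abs(W2 @ h + b2)

def solve_mpc(x0):
    """Nominal MPC whose state bounds are tightened by the learned mapping."""
    delta = constraint_tightening(x0)
    x = cp.Variable((nx, N + 1))
    u = cp.Variable((nu, N))
    cost, cons = 0, [x[:, 0] == x0]
    for k in range(N):
        cost += cp.quad_form(x[:, k], Q) + cp.quad_form(u[:, k], R)
        cons += [x[:, k + 1] == A @ x[:, k] + B @ u[:, k],
                 cp.abs(x[:, k + 1]) <= x_max - delta,   # tightened state bounds
                 cp.abs(u[:, k]) <= u_max]
    cp.Problem(cp.Minimize(cost), cons).solve()
    return u[:, 0].value                  # first input of the plan

print(solve_mpc(np.array([3.0, 0.0])))

The appeal of such a scheme is that the online cost reduces to one network forward pass plus one deterministic quadratic program, rather than a scenario program whose size grows with the number of disturbance samples drawn at each step.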

Keywords: stochastic MPC; policy learning; MPC implementation; MPC; fast stochastic

Journal Title: IEEE Control Systems Letters
Year Published: 2022
