
Successive Convex Approximation Based Off-Policy Optimization for Constrained Reinforcement Learning



Constrained reinforcement learning (CRL), also termed safe reinforcement learning, is a promising technique for enabling the deployment of RL agents in real-world systems. In this paper, we propose a successive convex approximation based off-policy optimization (SCAOPO) algorithm to solve the general CRL problem, which is formulated as a constrained Markov decision process (CMDP) in the context of average cost. SCAOPO solves a sequence of convex objective/feasibility optimization problems obtained by replacing the objective and constraint functions in the original problem with convex surrogate functions. The proposed SCAOPO enables the reuse of experiences from previous updates, thereby significantly reducing the implementation cost when it is deployed in real-world engineering systems that need to learn the environment online. In spite of the time-varying state distribution and the stochastic bias incurred by off-policy learning, SCAOPO with a feasible initial point can still provably converge to a Karush-Kuhn-Tucker (KKT) point of the original problem almost surely.
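For readers unfamiliar with the setting, the average-cost CMDP referenced in the abstract can be stated in the following standard form (a generic formulation, not necessarily the paper's exact notation):

```latex
\begin{aligned}
\min_{\theta}\quad & J_0(\theta) = \lim_{T\to\infty} \frac{1}{T}\,
  \mathbb{E}\!\left[\sum_{t=1}^{T} c_0(s_t, a_t)\right] \\
\text{s.t.}\quad   & J_i(\theta) = \lim_{T\to\infty} \frac{1}{T}\,
  \mathbb{E}\!\left[\sum_{t=1}^{T} c_i(s_t, a_t)\right] \le 0,
  \qquad i = 1, \dots, m,
\end{aligned}
```

where \(\theta\) parameterizes the policy, \(c_0\) is the objective cost, and \(c_1, \dots, c_m\) are constraint costs. In successive convex approximation (SCA) schemes of this kind, each \(J_i\) is replaced around the current iterate \(\theta^k\) by a convex surrogate; a common choice is the quadratic surrogate \(\bar{J}_i^{\,k}(\theta) = \hat{J}_i^{\,k} + (\hat{\nabla} J_i^{\,k})^{\top}(\theta - \theta^k) + \tau_i \lVert \theta - \theta^k \rVert^2\), where the hatted quantities are stochastic estimates that can be formed from (possibly off-policy) experience.

The sketch below illustrates one objective/feasibility step of a generic SCA loop of this type in Python, using CVXPY to solve the convex surrogate subproblems. All names here (sca_step, tau, the estimate inputs) are illustrative assumptions, not the paper's actual implementation:

```python
import numpy as np
import cvxpy as cp

def sca_step(theta_k, J_hat, g_hat, tau=1.0):
    """One generic SCA update around the current iterate theta_k.

    theta_k : (n,) current policy parameters
    J_hat   : (m+1,) stochastic estimates of J_0..J_m at theta_k
    g_hat   : (m+1, n) stochastic gradient estimates at theta_k
    Surrogate: J_hat[i] + g_hat[i] @ (theta - theta_k)
               + tau * ||theta - theta_k||^2
    """
    theta = cp.Variable(theta_k.size)
    d = theta - theta_k
    sur = [J_hat[i] + g_hat[i] @ d + tau * cp.sum_squares(d)
           for i in range(len(J_hat))]

    # Objective update: minimize the surrogate objective subject to the
    # surrogate constraints.
    prob = cp.Problem(cp.Minimize(sur[0]), [s <= 0 for s in sur[1:]])
    prob.solve()
    if prob.status in ("optimal", "optimal_inaccurate"):
        return theta.value

    # Feasibility update: if the surrogate constraints are inconsistent,
    # minimize the maximum constraint violation instead.
    alpha = cp.Variable()
    cp.Problem(cp.Minimize(alpha), [s <= alpha for s in sur[1:]]).solve()
    return theta.value

# Illustrative usage with random estimates (one objective, one constraint):
rng = np.random.default_rng(0)
theta = np.zeros(4)
theta_bar = sca_step(theta, J_hat=np.array([1.0, -0.5]),
                     g_hat=rng.standard_normal((2, 4)))
gamma = 0.5  # step size; diminishing in practice
theta = (1 - gamma) * theta + gamma * theta_bar
```

In SCA methods of this family, the surrogate solution is typically blended with the current iterate, e.g. \(\theta^{k+1} = (1-\gamma_k)\theta^k + \gamma_k \bar{\theta}^k\) with diminishing step sizes \(\gamma_k\), which is the mechanism that tolerates the stochastic, biased estimates produced by off-policy sampling.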

Keywords: optimization; reinforcement learning; constrained reinforcement learning; successive convex approximation

Journal Title: IEEE Transactions on Signal Processing
Year Published: 2022
