Constrained reinforcement learning (CRL), also termed safe reinforcement learning, is a promising technique enabling the deployment of RL agents in real-world systems. In this paper, we propose a successive convex approximation based off-policy optimization (SCAOPO) algorithm to solve the general CRL problem, which is formulated as a constrained Markov decision process (CMDP) in the context of the average cost. The SCAOPO is based on solving a sequence of convex objective/feasibility optimization problems obtained by replacing the objective and constraint functions in the original problem with convex surrogate functions. The proposed SCAOPO enables the reuse of experiences from previous updates, thereby significantly reducing the implementation cost when deployed in real-world engineering systems that need to learn the environment online. Despite the time-varying state distribution and the stochastic bias incurred by off-policy learning, the SCAOPO with a feasible initial point can still provably converge to a Karush-Kuhn-Tucker (KKT) point of the original problem almost surely.
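The following is a minimal sketch of the successive convex approximation idea described in the abstract, not the authors' SCAOPO implementation: at each iteration the nonconvex objective and constraints are replaced by convex surrogates built around the current policy parameters, the surrogate problem is solved, and the iterate moves toward the surrogate solution with a diminishing step size. The off-policy estimates obtained from reused experiences are stubbed here by a hypothetical `estimate_value_and_grad` on a toy constrained problem, and the surrogate problem is solved with SciPy's SLSQP.

```python
# Illustrative SCA loop for: minimize f(theta) subject to g_i(theta) <= 0.
# Assumptions: estimate_value_and_grad is a hypothetical stand-in for the
# off-policy value/gradient estimates (from reused experiences in the paper).
import numpy as np
from scipy.optimize import minimize

def estimate_value_and_grad(theta, rng):
    """Noisy estimates of objective/constraint values and gradients at theta.
    Toy problem: minimize ||theta||^2 subject to 1 - theta[0] <= 0."""
    noise = 0.05 * rng.standard_normal()
    f_val, f_grad = float(theta @ theta) + noise, 2.0 * theta
    g_val, g_grad = 1.0 - theta[0] + noise, np.array([-1.0, 0.0])
    return (f_val, f_grad), [(g_val, g_grad)]

def sca_step(theta, f_est, g_ests, tau=1.0):
    """Solve one convex surrogate problem: linearized objective/constraints plus
    a proximal term (tau/2)||x - theta||^2 keeping the surrogate strongly convex."""
    f_val, f_grad = f_est

    def surrogate_obj(x):
        d = x - theta
        return f_val + f_grad @ d + 0.5 * tau * d @ d

    # SLSQP expects inequality constraints of the form fun(x) >= 0.
    cons = [{"type": "ineq",
             "fun": (lambda x, v=v, g=g: -(v + g @ (x - theta)))}
            for v, g in g_ests]
    res = minimize(surrogate_obj, theta, method="SLSQP", constraints=cons)
    return res.x

rng = np.random.default_rng(0)
theta = np.array([2.0, 2.0])            # feasible initial point for the toy problem
for t in range(50):
    f_est, g_ests = estimate_value_and_grad(theta, rng)
    theta_bar = sca_step(theta, f_est, g_ests)
    gamma = 2.0 / (t + 2.0)             # diminishing step size toward the surrogate solution
    theta = (1.0 - gamma) * theta + gamma * theta_bar
print("approximate KKT point:", theta)  # expected near [1, 0] for the toy problem
```

In this sketch the surrogate is a simple proximal linearization; the paper's surrogates, objective/feasibility switching, and the handling of the time-varying state distribution in the off-policy estimates are more involved than what is shown here.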