Intrinsically Motivated Hierarchical Policy Learning in Multiobjective Markov Decision Processes

Multiobjective Markov decision processes (MOMDPs) are sequential decision-making problems that involve multiple conflicting reward functions, which cannot be optimized simultaneously without compromise. Such problems cannot be solved by a single optimal policy, as in the conventional case. Instead, multiobjective reinforcement learning (RL) methods evolve a coverage set of optimal policies that can satisfy all possible preferences in solving the problem. However, many of these methods cannot generalize their coverage sets to nonstationary environments, in which the parameters of the state-transition and reward distributions vary over time. This limitation results in significant performance degradation for the evolved policy sets. Overcoming it requires learning a generic skill set that can bootstrap the evolution of the policy coverage set for each shift in the environment dynamics, thereby facilitating a continuous learning process. In this article, intrinsically motivated RL (IMRL) is deployed to evolve generic skill sets for learning hierarchical policies that solve MOMDPs. We propose a novel dual-phase IMRL method to address this limitation: in the first phase, a generic set of skills is learned; in the second phase, this set is used to bootstrap policy coverage sets for each shift in the environment dynamics. We show experimentally that the proposed method significantly outperforms state-of-the-art multiobjective RL methods in a dynamic robotics environment.
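To make the coverage-set idea concrete, the sketch below computes a convex coverage set under linear scalarization for a toy two-objective problem. The policy names and vector returns are hypothetical illustration values, not from the paper, and this illustrates only the standard coverage-set concept the abstract builds on, not the proposed dual-phase IMRL method.

```python
import numpy as np

# Hypothetical vector returns of four candidate policies in a
# two-objective problem (e.g., reward value vs. negative cost).
policy_returns = {
    "A": np.array([10.0, -1.0]),
    "B": np.array([6.0, -0.5]),
    "C": np.array([2.0, -0.1]),
    "D": np.array([1.0, -2.0]),   # dominated: never optimal for any weight
}

def convex_coverage_set(returns, n_weights=101):
    """Return the policies that are optimal for at least one linear
    preference w . r, with w swept over the 2-D simplex (w0 + w1 = 1)."""
    ccs = set()
    for w0 in np.linspace(0.0, 1.0, n_weights):
        w = np.array([w0, 1.0 - w0])
        best = max(returns, key=lambda p: float(w @ returns[p]))
        ccs.add(best)
    return ccs

print(sorted(convex_coverage_set(policy_returns)))  # ['A', 'B', 'C']
```

Each weight vector encodes one user preference over the objectives; the coverage set is the union of policies that are best for some preference. The nonstationarity problem the abstract describes arises because such a set, once evolved, becomes stale when the transition or reward distributions shift.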

Keywords: Markov decision processes; intrinsically motivated; multiobjective; policy

Journal Title: IEEE Transactions on Cognitive and Developmental Systems
Year Published: 2021

