Double Sparse Deep Reinforcement Learning via Multilayer Sparse Coding and Nonconvex Regularized Pruning

Deep reinforcement learning (DRL), which depends heavily on data representation, has shown its potential in many practical decision-making problems. However, the representation-learning process in DRL is easily affected by model interference and, moreover, retains unnecessary parameters, degrading control performance. In this article, we propose a double sparse DRL method based on multilayer sparse coding and nonconvex regularized pruning. To alleviate interference in DRL, we propose a multilayer sparse-coding-structured network that obtains deep sparse representations for control in reinforcement learning. Furthermore, we employ a nonconvex log regularizer to promote strong sparsity, efficiently removing unnecessary weights with a regularizer-based pruning scheme. Hence, we develop a double sparse DRL algorithm that not only learns deep sparse representations to reduce interference but also removes redundant weights while maintaining robust performance. Experimental results in five benchmark environments under the deep $Q$-network (DQN) architecture demonstrate that the proposed method, with deep sparse representations from the multilayer sparse-coding structure, outperforms existing sparse-coding-based DRL in control: for example, it completes Mountain Car in 140.81 steps, achieves a nearly 10% reward increase over the single-layer sparse-coding DRL algorithm, and obtains a score of 286.08 in Catcher, more than twice the reward of the other algorithms. Moreover, the proposed algorithm removes over 80% of the parameters while retaining the performance gains from deep sparse representations.
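To make the two mechanisms named in the abstract concrete, the following is a minimal sketch assuming PyTorch: a sparse-coding layer realized by unrolled ISTA iterations (several of which could be stacked to form a multilayer sparse-coding network) and a nonconvex log regularizer with regularizer-based magnitude pruning. All names here (SparseCodingLayer, log_penalty, prune_small_weights) and all hyperparameter values are hypothetical illustrations, not the authors' implementation.

    # A minimal sketch, not the paper's code; names and values are assumptions.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class SparseCodingLayer(nn.Module):
        # One sparse-coding layer as a few unrolled ISTA steps; stacking
        # several such layers gives a multilayer sparse-coding network.
        def __init__(self, in_dim, code_dim, n_steps=3, step_size=0.1, theta=0.01):
            super().__init__()
            self.D = nn.Parameter(torch.randn(in_dim, code_dim) * 0.1)  # dictionary
            self.n_steps, self.step_size, self.theta = n_steps, step_size, theta

        def forward(self, x):
            z = x.new_zeros(x.shape[0], self.D.shape[1])
            for _ in range(self.n_steps):
                residual = x - z @ self.D.t()                 # x - D z
                z = z + self.step_size * residual @ self.D    # gradient step
                z = torch.sign(z) * F.relu(z.abs() - self.theta)  # soft threshold
            return z  # sparse code used as the state representation

    def log_penalty(w, eps=1e-2):
        # Nonconvex log regularizer sum(log(1 + |w|/eps)): it penalizes small
        # weights more aggressively than the convex L1 norm, promoting the
        # "strong sparsity" the abstract describes.
        return torch.log1p(w.abs() / eps).sum()

    def prune_small_weights(model, threshold=1e-3):
        # Regularizer-based pruning: permanently zero the weights that the
        # log penalty has driven near zero during training.
        with torch.no_grad():
            for p in model.parameters():
                p.mul_((p.abs() > threshold).to(p.dtype))

In a DQN setting, the penalty would be added to the temporal-difference loss, e.g. loss = td_loss + lam * sum(log_penalty(p) for p in q_net.parameters()), with prune_small_weights applied periodically; the actual regularizer weighting and pruning schedule are specified in the paper, not here.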

Keywords: double sparse; reinforcement learning; multilayer sparse coding; sparse coding

Journal Title: IEEE Transactions on Cybernetics
Year Published: 2022
