Policy Reuse for Dialog Management Using Action-Relation Probability

We study the problem of policy adaptation for reinforcement-learning-based dialog management. Policy adaptation is a commonly used technique to alleviate data sparsity when training a goal-oriented dialog system for a new task (the target task) by reusing knowledge from policies learned in an existing task (the source task). Current approaches to dialog policy adaptation require considerable time and effort because they use reinforcement learning algorithms to train a new policy for the target task from scratch. In this paper, we show that a dialog policy can be obtained without reinforcement learning on the target task. In contrast to existing works, our proposed method learns the relation between the action sets of the source and target tasks in the form of a probability distribution. From this relation, we can immediately derive a policy for the target task, which significantly reduces adaptation time. Our experiments show that the proposed method learns a new policy for the target task much more quickly. In addition, the derived policy achieves higher performance than policies created by fine-tuning when the amount of available data on the target task is limited.
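The core idea can be illustrated with a small sketch. Assuming the learned action relation takes the form of a conditional probability matrix P(a_target | a_source), a target-task policy can be derived from the source policy by marginalizing over the source actions, with no further reinforcement learning. All names, shapes, and values below are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def derive_target_policy(source_policy, action_relation):
    """Derive a target-task policy from a source policy and an
    action-relation probability matrix (illustrative sketch).

    source_policy:   (n_states, n_source_actions), rows are pi_src(a_s | s)
    action_relation: (n_source_actions, n_target_actions), rows are P(a_t | a_s)
    returns:         (n_states, n_target_actions), rows are pi_tgt(a_t | s)
    """
    # Marginalize out the source action:
    # pi_tgt(a_t | s) = sum_{a_s} pi_src(a_s | s) * P(a_t | a_s)
    target_policy = source_policy @ action_relation
    # Renormalize rows to guard against numerical drift.
    return target_policy / target_policy.sum(axis=1, keepdims=True)

# Toy example: 2 dialog states, 3 source actions, 2 target actions.
pi_src = np.array([[0.7, 0.2, 0.1],
                   [0.1, 0.3, 0.6]])
relation = np.array([[0.9, 0.1],
                     [0.5, 0.5],
                     [0.2, 0.8]])
pi_tgt = derive_target_policy(pi_src, relation)
```

Because the derivation is a single matrix product rather than an RL training loop, the target policy is available as soon as the action-relation distribution has been estimated, which is where the claimed reduction in adaptation time comes from.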

Keywords: target task; policy; task; dialog management

Journal Title: IEEE Access
Year Published: 2020

