
Deep reinforcement learning for extractive document summarization



Abstract: We present a novel extractive document summarization approach based on a Deep Q-Network (DQN), which can model the salience and redundancy of sentences in the Q-value approximation and learn a policy that maximizes the Rouge score with respect to gold summaries. We design two hierarchical network architectures that not only generate informative features from the document to represent the states of the DQN, but also create a list of potential actions from the sentences in the document. At training time, our model is trained directly on human-written reference summaries, eliminating the need for sentence-level extractive labels. For testing, we evaluate this model on the CNN/Daily Mail corpus, the DUC 2002 dataset, and the DUC 2004 dataset using the Rouge metric. Our experiments show that our approach achieves performance better than or comparable to that of state-of-the-art models on these corpora without any access to linguistic annotation. This is the first time a DQN has been applied to extractive summarization tasks.
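The abstract describes the method only at a high level and no code accompanies this page, so the following is a minimal sketch (in PyTorch) of the general recipe it outlines: treat extractive summarization as a sequential decision process, let a Q-network score each candidate sentence (action) given the summary built so far (state), and reward the incremental Rouge gain against the gold summary. The toy document, the averaged random word vectors, the simplified unigram Rouge-1 F1 reward, and all names such as QNetwork and sent_vec are illustrative assumptions, not the authors' hierarchical architectures.

import random
import torch
import torch.nn as nn

torch.manual_seed(0)
random.seed(0)

# Toy "document" of tokenized sentences and a gold (reference) summary.
document = [
    "a deep q network scores the document sentences".split(),
    "the weather was pleasant on tuesday afternoon".split(),
    "salience and redundancy drive the sentence selection".split(),
    "rouge against the gold summary provides the reward".split(),
]
reference = "a q network selects salient sentences rewarded by rouge".split()

DIM = 32
vocab = {w for sent in document for w in sent}
emb = {w: torch.randn(DIM) for w in vocab}  # frozen random word vectors

def sent_vec(tokens):
    # Average word vectors as a crude sentence representation.
    return torch.stack([emb[w] for w in tokens]).mean(0)

def rouge1_f(cand, ref):
    # Simplified unigram Rouge-1 F1, standing in for the full Rouge reward.
    overlap = len(set(cand) & set(ref))
    if overlap == 0:
        return 0.0
    p, r = overlap / len(set(cand)), overlap / len(set(ref))
    return 2 * p * r / (p + r)

class QNetwork(nn.Module):
    # Scores every candidate sentence (action) given the summary-so-far (state).
    def __init__(self, dim):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(2 * dim, dim), nn.Tanh(), nn.Linear(dim, 1))

    def forward(self, state, sentences):
        # state: (dim,); sentences: (n, dim) -> Q-values: (n,)
        pairs = torch.cat([state.unsqueeze(0).expand_as(sentences), sentences], dim=-1)
        return self.score(pairs).squeeze(-1)

qnet = QNetwork(DIM)
opt = torch.optim.Adam(qnet.parameters(), lr=1e-2)
sent_vecs = torch.stack([sent_vec(s) for s in document])
GAMMA, EPSILON, BUDGET = 0.9, 0.3, 2  # summary budget: pick two sentences

for episode in range(300):
    chosen, summary, state, prev = [], [], torch.zeros(DIM), 0.0
    for step in range(BUDGET):
        q = qnet(state, sent_vecs)
        avail = [i for i in range(len(document)) if i not in chosen]
        if random.random() < EPSILON:  # epsilon-greedy exploration
            action = random.choice(avail)
        else:
            action = max(avail, key=lambda i: q[i].item())
        chosen.append(action)
        summary += document[action]
        score = rouge1_f(summary, reference)
        reward, prev = score - prev, score  # incremental Rouge gain as reward
        next_state = sent_vecs[chosen].mean(0)
        remaining = [i for i in avail if i != action]
        if step == BUDGET - 1 or not remaining:
            q_next = 0.0  # episode ends once the budget is spent
        else:
            with torch.no_grad():
                q_next = qnet(next_state, sent_vecs)[remaining].max().item()
        loss = (q[action] - (reward + GAMMA * q_next)) ** 2  # one-step Q-learning
        opt.zero_grad()
        loss.backward()
        opt.step()
        state = next_state

# Greedy rollout with the trained Q-network.
state, chosen = torch.zeros(DIM), []
for _ in range(BUDGET):
    q = qnet(state, sent_vecs)
    avail = [i for i in range(len(document)) if i not in chosen]
    chosen.append(max(avail, key=lambda i: q[i].item()))
    state = sent_vecs[chosen].mean(0)
print("selected sentence indices:", sorted(chosen))

Excluding already-selected sentences from the action set is a crude stand-in for the redundancy modeling the abstract mentions; in the paper, hierarchical encoders rather than averaged random word vectors would produce the state and action representations.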

Keywords: extractive document summarization; reinforcement learning; deep reinforcement learning

Journal Title: Neurocomputing
Year Published: 2018
