Video summarization is a technique for condensing large-scale videos into summaries composed of key-frames or key-shots so that viewers can browse the video content efficiently. Recently, supervised approaches have achieved great success by taking advantage of recurrent neural networks (RNNs). Most of them focus on generating summaries by maximizing the overlap between the generated summary and the ground truth. However, they neglect the most critical principle: whether the viewer can infer the original video content from the summary. As a result, existing approaches cannot preserve summary quality well and usually demand large amounts of training data to reduce overfitting. In our view, video summarization involves two tasks: generating summaries from videos and inferring the original content from summaries. Motivated by this, we propose a dual learning framework that integrates summary generation (the primal task) and video reconstruction (the dual task), aiming to reward the summary generator with the assistance of the video reconstructor. Moreover, to provide more guidance to the summary generator, two property models are developed to measure the representativeness and diversity of the generated summary. Experiments on four popular datasets (SumMe, TVSum, OVP, and YouTube) demonstrate that our approach, with compact RNNs as the summary generator, using less training data, and even in the unsupervised setting, achieves performance comparable to supervised approaches that adopt more complex summary generators and are trained on more annotated data.
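
To make the dual-learning idea concrete, the sketch below outlines one possible training step: a compact RNN scores frame importance (primal task), a second RNN tries to reconstruct the frame features from the score-weighted summary (dual task), and two illustrative property losses encourage representativeness and diversity. All module names, dimensions, loss formulations, and weights here are assumptions for illustration, not the authors' exact implementation.

```python
# Minimal PyTorch sketch of the dual-learning framework described in the abstract.
# Architecture details, losses, and hyperparameters are illustrative assumptions.
import torch
import torch.nn as nn

class SummaryGenerator(nn.Module):
    """Primal task: score each frame's importance with a compact RNN."""
    def __init__(self, feat_dim=1024, hidden_dim=256):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True, bidirectional=True)
        self.head = nn.Linear(2 * hidden_dim, 1)

    def forward(self, frames):                           # frames: (B, T, feat_dim)
        h, _ = self.rnn(frames)
        return torch.sigmoid(self.head(h)).squeeze(-1)   # importance scores in (0, 1)

class VideoReconstructor(nn.Module):
    """Dual task: rebuild frame features from the score-weighted summary."""
    def __init__(self, feat_dim=1024, hidden_dim=256):
        super().__init__()
        self.rnn = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, feat_dim)

    def forward(self, weighted_frames):
        h, _ = self.rnn(weighted_frames)
        return self.out(h)

def diversity_loss(frames, scores, eps=1e-8):
    """Penalize jointly selecting visually similar frames (one possible formulation)."""
    f = frames / (frames.norm(dim=-1, keepdim=True) + eps)
    sim = f @ f.transpose(1, 2)                           # pairwise cosine similarity
    w = scores.unsqueeze(2) * scores.unsqueeze(1)
    return (w * sim).mean()

def representativeness_loss(frames, scores, eps=1e-8):
    """Keep the score-weighted summary close to the mean content of the full video."""
    summary = (scores.unsqueeze(-1) * frames).sum(1) / (scores.sum(1, keepdim=True) + eps)
    return ((frames.mean(1) - summary) ** 2).mean()

# One unsupervised training step: reconstruction objective plus property losses.
generator, reconstructor = SummaryGenerator(), VideoReconstructor()
optimizer = torch.optim.Adam(
    list(generator.parameters()) + list(reconstructor.parameters()), lr=1e-4)

frames = torch.randn(2, 120, 1024)                        # placeholder CNN frame features
scores = generator(frames)
reconstructed = reconstructor(scores.unsqueeze(-1) * frames)

loss = (nn.functional.mse_loss(reconstructed, frames)     # dual (reconstruction) term
        + 0.5 * diversity_loss(frames, scores)
        + 0.5 * representativeness_loss(frames, scores))
loss.backward()
optimizer.step()
```

Because the reconstruction term requires only the raw frame features, this kind of loop can be trained without annotated summaries, which is consistent with the unsupervised setting mentioned in the abstract.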
               