Spatiotemporal attention learning for video question answering (VideoQA) has long been a challenging task, and existing approaches treat the attention parts and the nonattention parts of a video in isolation. In this work, we propose to enforce the correlation between the attention parts and the nonattention parts as a distance constraint for discriminative spatiotemporal attention learning. Specifically, we first introduce a novel attention-guided erasing mechanism into traditional spatiotemporal attention to obtain multiple aggregated attention features and nonattention features, and then learn to separate the attention features from the nonattention features by an appropriate distance. The distance constraint is enforced by a metric learning loss and adds no inference complexity. In this way, the model learns to produce a more discriminative spatiotemporal attention distribution over videos, enabling more accurate question answering. To incorporate multiscale spatiotemporal information, which is beneficial for video understanding, we additionally develop a pyramid variant on the basis of the proposed approach. Comprehensive ablation experiments validate the effectiveness of our approach, and it achieves state-of-the-art performance on several widely used VideoQA datasets.
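The training-time idea described above can be illustrated with a minimal PyTorch sketch. This is not the authors' code: the function name `attention_erase_loss`, the single-head dot-product attention, the `erase_ratio` and `margin` hyperparameters, and the hinge-style distance loss are all illustrative assumptions consistent with the abstract, not details taken from the paper.

```python
import torch
import torch.nn.functional as F

def attention_erase_loss(features, q, erase_ratio=0.2, margin=1.0):
    """Hypothetical sketch of attention-guided erasing with a distance constraint.

    features: (B, N, D) spatiotemporal region features per clip
    q:        (B, D)    question embedding
    Returns the attended feature and an auxiliary metric-learning loss.
    """
    # Question-guided attention over the N spatiotemporal regions.
    scores = torch.einsum('bnd,bd->bn', features, q)          # (B, N)
    attn = F.softmax(scores, dim=1)
    attended = torch.einsum('bn,bnd->bd', attn, features)     # (B, D)

    # Erase the top-attended regions, then re-aggregate the remainder
    # to obtain the "nonattention" feature.
    n = features.size(1)
    k = min(max(1, int(erase_ratio * n)), n - 1)              # keep >=1 region
    topk = attn.topk(k, dim=1).indices                        # (B, k)
    mask = torch.ones_like(scores).scatter(1, topk, 0.0)      # 0 on erased regions
    erased_attn = F.softmax(scores.masked_fill(mask == 0, float('-inf')), dim=1)
    non_attended = torch.einsum('bn,bnd->bd', erased_attn, features)

    # Hinge-style metric-learning loss: push the attention and nonattention
    # features at least `margin` apart.
    dist = F.pairwise_distance(attended, non_attended)        # (B,)
    loss = F.relu(margin - dist).mean()
    return attended, loss

# Usage with random stand-in data: 8 clips, 16 regions, 256-d features.
feats = torch.randn(8, 16, 256)
ques = torch.randn(8, 256)
pooled, aux_loss = attention_erase_loss(feats, ques)
```

In a sketch like this, only `aux_loss` is added to the training objective; at test time the model uses the plain attention path (`attended`) alone, consistent with the abstract's claim that the distance constraint adds no inference complexity.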