
Breaking Winner-Takes-All: Iterative-Winners-Out Networks for Weakly Supervised Temporal Action Localization



We address the challenging problem of weakly supervised temporal action localization in unconstrained web videos, where only video-level action labels are available during training. Inspired by the adversarial erasing strategy in weakly supervised semantic segmentation, we propose a novel iterative-winners-out network. Specifically, we make two technical contributions. First, we propose an iterative training strategy, winners-out, that selects the most discriminative action instances in each training iteration and removes them in the next. This iterative process alleviates the "winner-takes-all" phenomenon, in which existing approaches tend to choose the video segments that strongly correspond to the video label while neglecting other, less discriminative segments. With this strategy, our network localizes not only the most discriminative instances but also the less discriminative ones. Second, to better select the target action instances in winners-out, we devise a class-discriminative localization technique. By employing an attention mechanism and the information learned from the data, this technique identifies the most discriminative action instances effectively. The two key components are integrated into an end-to-end network that localizes actions without frame-level annotations. Extensive experiments demonstrate that our method outperforms state-of-the-art weakly supervised approaches on ActivityNet1.3 and improves mAP from 16.9% to 20.5% on THUMOS14. Notably, even with only weak video-level supervision, our method attains accuracy comparable to methods trained with frame-level supervision.
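The winners-out iteration described in the abstract can be sketched as a simple select-then-erase loop. The following is an illustrative sketch only, not the authors' implementation: the function name, the `num_iterations` and `top_k` parameters, and the flat list of per-segment scores are all assumptions introduced for clarity; in the actual network the scores would come from the class-discriminative localization module and the erasing would happen during training.

```python
# Hypothetical sketch of the "winners-out" select-and-erase loop.
# All names and parameters here are illustrative assumptions.

def winners_out(segment_scores, num_iterations=3, top_k=2):
    """Iteratively pick the most discriminative segments ("winners")
    and erase them before the next pass, so that later iterations are
    forced to surface the less discriminative segments as well."""
    scores = list(segment_scores)  # None marks an erased segment
    localized = []
    for _ in range(num_iterations):
        # Rank the remaining (non-erased) segments by their score.
        candidates = sorted(
            (i for i, s in enumerate(scores) if s is not None),
            key=lambda i: scores[i],
            reverse=True,
        )
        winners = candidates[:top_k]
        if not winners:
            break
        localized.extend(winners)
        # "Winners out": erase the selected segments for the next iteration.
        for i in winners:
            scores[i] = None
    return sorted(localized)

# With two iterations, segments 3 and 0 win first, then 5 and 1,
# so the final localization covers both strong and weaker segments.
print(winners_out([0.7, 0.4, 0.1, 0.9, 0.2, 0.6],
                  num_iterations=2, top_k=2))  # → [0, 1, 3, 5]
```

The point of the erasure step is that a single forward pass would keep returning the same top-scoring segments; removing them forces subsequent iterations to attend to the remaining, less discriminative parts of the action.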

Keywords: supervised temporal; temporal action; action localization; action; weakly supervised

Journal Title: IEEE Transactions on Image Processing
Year Published: 2019



