We address the challenging problem of weakly supervised temporal action localization in unconstrained web videos, where only video-level action labels are available during training. Inspired by the adversarial erasing strategy used in weakly supervised semantic segmentation, we propose a novel iterative-winners-out network. Specifically, we make two technical contributions. First, we propose an iterative training strategy, winners-out, which selects the most discriminative action instances in each training iteration and removes them in the next. This iterative process alleviates the "winner-takes-all" phenomenon, in which existing approaches tend to choose video segments that strongly correspond to the video label while neglecting other, less discriminative segments. With this strategy, our network localizes not only the most discriminative instances but also the less discriminative ones. Second, to better select target action instances in winners-out, we devise a class-discriminative localization technique. By employing an attention mechanism and the information learned from data, this technique identifies the most discriminative action instances effectively. The two key components are integrated into an end-to-end network that localizes actions without frame-level annotations. Extensive experimental results demonstrate that our method outperforms state-of-the-art weakly supervised approaches on ActivityNet1.3 and improves mAP from 16.9% to 20.5% on THUMOS14. Notably, even with weak video-level supervision, our method attains accuracy comparable to methods that employ frame-level supervision.
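To make the select-then-erase idea concrete, below is a minimal Python sketch of a winners-out style loop, written under stated assumptions rather than as the paper's actual implementation. The names `score_segments`, `train_step`, `num_rounds`, and `top_k` are hypothetical placeholders: `score_segments` stands in for the class-discriminative, attention-based scoring described in the abstract, and `train_step` for one round of network training on the non-erased segments.

```python
# Minimal sketch of an iterative "winners-out" erasing loop (illustrative only).
# Assumptions: `score_segments(segments, label)` returns one float score per
# segment (higher = more discriminative for `label`), and
# `train_step(segments, label, mask=...)` trains on the still-active segments.
import numpy as np

def winners_out_localization(video_segments, label, score_segments, train_step,
                             num_rounds=3, top_k=2):
    """Iteratively pick the most discriminative segments ("winners"),
    record them as localized instances, then erase them so subsequent
    rounds must rely on less discriminative evidence of the action."""
    active = np.ones(len(video_segments), dtype=bool)  # segments not yet erased
    localized = []                                     # accumulated detections
    for _ in range(num_rounds):
        scores = np.asarray(score_segments(video_segments, label), dtype=float)
        scores[~active] = -np.inf          # erased segments cannot win again
        winners = [i for i in np.argsort(scores)[-top_k:] if active[i]]
        if not winners:
            break                          # nothing discriminative remains
        localized.extend(winners)
        active[winners] = False            # erase winners for the next round
        # Train on the remaining segments, forcing the model to attend to
        # less discriminative parts of the same action.
        train_step(video_segments, label, mask=active)
    return localized
```

The design point the sketch captures is that erasing happens between training rounds: because each round's winners are masked out of the next round's training signal, the classifier cannot keep relying on the same strongly label-correlated segments, which is how the "winner-takes-all" bias is alleviated.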