Actor-Aware Alignment Network for Action Recognition
Action recognition has attracted growing interest recently, but it suffers from the problem that complex and diverse environments may disturb the extraction of action features. Existing methods explore temporal associations to alleviate this issue; however, they cannot handle long-range frames, and their rigid techniques cannot cope with the appearance differences caused by the deformation of actors. To this end, we propose the Actor-Aware Alignment Network (A³Net), which helps locate the action region. Specifically, intra-snippet correction aligns frames within each local segment, while inter-snippet correction rectifies these results, avoiding occlusions that may appear within a local snippet. In addition, we consider both short-range adjacent frames within a snippet and long-range context frames across snippets, which allows A³Net to focus on long-range frame information. Multiple Reasoning Attention (MRA) modules are introduced to integrate features along the temporal dimension, keeping the video spatio-temporally consistent. Extensive experiments on three widely used public benchmarks, UCF101, HMDB51, and InfAR, demonstrate the superiority of our approach over other state-of-the-art models in wild scenarios.
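The paper's MRA design is not detailed in this abstract, so the following is only a minimal sketch of the general idea it names: integrating per-frame features along the temporal dimension with attention so that all frames in a snippet inform one another. The class name, feature dimension, and use of multi-head self-attention are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn

class TemporalReasoningAttention(nn.Module):
    """Hypothetical sketch: each frame's feature vector attends to all
    other frames in the clip, so temporal context is mixed into every
    frame representation (the role the abstract ascribes to MRA)."""

    def __init__(self, feat_dim: int = 512, num_heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, num_frames, feat_dim) frame-level features from a backbone
        attended, _ = self.attn(x, x, x)  # self-attention across the temporal axis
        return self.norm(x + attended)    # residual keeps each frame's own identity

# Usage: features for an 8-frame snippet, batch of 4 clips.
feats = torch.randn(4, 8, 512)
out = TemporalReasoningAttention()(feats)
print(out.shape)  # torch.Size([4, 8, 512])
```

The residual connection plus normalization is a common choice in such blocks: attention supplies cross-frame context while the skip path preserves per-frame detail.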