
Meta-Learning Paradigm and CosAttn for Streamer Action Recognition in Live Video


As an emerging field of online content production, live video has long sat in a vacuum zone of cyberspace governance. Streamer action recognition supports the supervision of live video content. Given the diversity and imbalance of streamer actions, it is attractive to introduce few-shot learning for streamer action recognition. We therefore propose a streamer action recognition method for live video based on a meta-learning paradigm and CosAttn, which consists of three steps: (1) the backbone network is pretrained on training-set samples similar to the streamer actions to be recognized; (2) video-level features are extracted by an R(2+1)D-18 backbone followed by global average pooling within the meta-learning paradigm; (3) the streamer action is recognized by feeding the video-level features to CosAttn to generate a streamer action category prototype and then computing cosine similarity. Experimental results on several real-world action recognition datasets demonstrate the effectiveness of the method.
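The abstract only outlines the recognition pipeline, so the following is a minimal PyTorch sketch of how a single few-shot episode could be scored. It assumes the torchvision R(2+1)D-18 backbone; the CosAttnPrototype module, its cosine-attention weighting, and the episode shapes are illustrative assumptions, since the paper's exact CosAttn formulation is not given in the abstract.

```python
# Minimal sketch, assuming PyTorch + torchvision; CosAttnPrototype below is an
# illustrative stand-in for the paper's CosAttn, whose exact form the abstract
# does not specify.
import torch
import torch.nn as nn
import torch.nn.functional as F
from torchvision.models.video import r2plus1d_18


class CosAttnPrototype(nn.Module):
    """Generates a class prototype by attending over support-set features,
    weighting each support video by its cosine similarity to the query
    (assumed form of the attention)."""

    def forward(self, support_feats: torch.Tensor, query_feat: torch.Tensor) -> torch.Tensor:
        # support_feats: (K, D) video-level features of K support clips of one class
        # query_feat:    (D,)   video-level feature of the query clip
        sims = F.cosine_similarity(support_feats, query_feat.unsqueeze(0), dim=-1)  # (K,)
        attn = torch.softmax(sims, dim=0)                                           # (K,)
        return (attn.unsqueeze(-1) * support_feats).sum(dim=0)                      # (D,) prototype


# R(2+1)D-18 backbone; swapping the classifier head for Identity exposes the
# globally average-pooled 512-d video-level feature.
backbone = r2plus1d_18()
backbone.fc = nn.Identity()
backbone.eval()

cos_attn = CosAttnPrototype()

# Toy 5-way, 3-shot episode with random clips (3 channels, 16 frames, 112x112).
with torch.no_grad():
    query_feat = backbone(torch.randn(1, 3, 16, 112, 112)).squeeze(0)    # (512,)

    scores = []
    for _ in range(5):  # one iteration per candidate streamer-action class
        support_feats = backbone(torch.randn(3, 3, 16, 112, 112))        # (3, 512)
        prototype = cos_attn(support_feats, query_feat)                  # (512,)
        scores.append(F.cosine_similarity(prototype, query_feat, dim=0))

    predicted_class = torch.stack(scores).argmax().item()

print("predicted class index:", predicted_class)
```

In this sketch, replacing the classifier head with nn.Identity yields the globally average-pooled video-level feature, and each class score is the cosine similarity between the query feature and its attention-weighted prototype; the actual method would use real support/query clips sampled per episode rather than random tensors.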

Keywords: live video; action; streamer action; action recognition

Journal Title: IEEE Signal Processing Letters
Year Published: 2022
