
VSRN: Visual-Semantic Relation Network for Video Visual Relation Inference

Video visual relation inference refers to the task of automatically detecting relation triplets of the form $\langle subject, predicate, object \rangle$ between the observed objects in videos, which requires correctly labeling each detected object and the interaction predicates between them. Despite recent advances in image visual relation detection using deep learning techniques, relation inference in videos remains a challenging topic. On one hand, the introduction of temporal information requires modeling the rich spatio-temporal visual information of objects and videos. On the other hand, wild videos are often annotated with incomplete relation triplet tags, some of which are semantically overlapping. Previous methods, however, adopt hand-crafted visual features extracted from object trajectories, which describe only the local appearance characteristics of isolated objects, and they treat the problem as a multi-class classification task, which makes the relation tags mutually exclusive. To address these issues, we propose a novel model termed the Visual-Semantic Relation Network (VSRN). In this network, we leverage three-dimensional convolution kernels to capture spatio-temporal features and encode global visual features of videos through a pooling operation on each time slice. Moreover, the semantic collocations between objects are also incorporated to obtain comprehensive representations of the relationships. For relation classification, we treat the problem as a multi-label classification task and regard each tag as independent, so that multiple relationships can be predicted. Additionally, we modify the commonly used evaluation metric, video-wise recall, into a pair-wise metric (Roop) to test how well models predict multiple relationships for each object pair. Extensive experimental results on two large-scale datasets demonstrate the effectiveness of our proposed model, which significantly outperforms previous works.
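
To make the feature-extraction description concrete, here is a minimal sketch of how a 3D convolution followed by per-time-slice spatial pooling can yield global visual features. It assumes PyTorch; the module name, channel sizes, and kernel configuration are illustrative assumptions, not the paper's actual settings.

```python
import torch
import torch.nn as nn

class SpatioTemporalEncoder(nn.Module):
    """Sketch: a 3D convolution captures spatio-temporal features,
    then each time slice is spatially pooled into a global per-frame
    descriptor. Sizes are illustrative, not the paper's configuration."""

    def __init__(self, in_channels: int = 3, feat_dim: int = 256):
        super().__init__()
        # 3D kernel convolves jointly over time, height, and width.
        self.conv3d = nn.Conv3d(in_channels, feat_dim,
                                kernel_size=(3, 3, 3), padding=1)
        # Pool H and W to 1x1 while leaving the time axis untouched,
        # i.e. one global feature vector per time slice.
        self.spatial_pool = nn.AdaptiveAvgPool3d((None, 1, 1))

    def forward(self, clip: torch.Tensor) -> torch.Tensor:
        # clip: (batch, channels, time, height, width)
        x = torch.relu(self.conv3d(clip))        # (B, D, T, H, W)
        x = self.spatial_pool(x)                 # (B, D, T, 1, 1)
        return x.flatten(2).transpose(1, 2)      # (B, T, D)

# Usage: a 16-frame clip yields one 256-d global feature per slice.
clip = torch.randn(2, 3, 16, 112, 112)
feats = SpatioTemporalEncoder()(clip)            # shape (2, 16, 256)
```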
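The abstract contrasts multi-class classification, where relation tags are mutually exclusive, with the multi-label framing VSRN adopts. The following sketch shows the multi-label idea in its simplest form: an independent sigmoid per predicate tag trained with binary cross-entropy, so several tags can hold for one object pair. The vocabulary size, feature dimension, and threshold are assumed for illustration.

```python
import torch
import torch.nn as nn

# Assumed sizes, for illustration only.
num_predicates = 132          # predicate vocabulary size
pair_feat_dim = 512           # relation representation size

relation_head = nn.Linear(pair_feat_dim, num_predicates)
criterion = nn.BCEWithLogitsLoss()   # one independent sigmoid per tag

pair_features = torch.randn(8, pair_feat_dim)    # 8 object pairs
targets = torch.zeros(8, num_predicates)
targets[0, [3, 17]] = 1.0     # a pair may hold several predicates at once

logits = relation_head(pair_features)
loss = criterion(logits, targets)

# At inference, every predicate whose score clears the threshold is
# predicted, so semantically overlapping tags can co-occur.
predicted = torch.sigmoid(logits) > 0.5
```

With a softmax head the two target predicates above would compete for probability mass; the sigmoid head scores each tag independently, which is what makes predicting multiple relationships per pair possible.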
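The pair-wise metric (Roop) is only named, not defined, in the abstract. The sketch below shows one plausible reading consistent with the stated goal: recall computed per object pair over that pair's set of ground-truth predicates, averaged across pairs. Treat this as an assumption, not the paper's exact formulation.

```python
def pairwise_recall(predictions: dict, ground_truth: dict) -> float:
    """Assumed reading of a pair-wise recall: for each annotated
    object pair, the fraction of its ground-truth predicates that
    appear among the predictions, averaged over pairs."""
    per_pair = []
    for pair, gt_predicates in ground_truth.items():
        predicted = predictions.get(pair, set())
        per_pair.append(len(gt_predicates & predicted) / len(gt_predicates))
    return sum(per_pair) / len(per_pair) if per_pair else 0.0

# Usage: one pair with two ground-truth predicates, one recovered.
gt = {("person", "dog"): {"walk", "next_to"}}
pred = {("person", "dog"): {"walk"}}
print(pairwise_recall(pred, gt))   # 0.5
```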

Keywords: network; relation inference; visual relation; video; relation

Journal Title: IEEE Transactions on Circuits and Systems for Video Technology
Year Published: 2022
