
MVCLN: Multi-View Convolutional LSTM Network for Cross-Media 3D Shape Recognition


Cross-media 3D model recognition is an important and challenging task in computer vision, with applications such as landmark detection and image set classification. In recent years, with the development of deep learning, many approaches have been proposed to handle the 3D model recognition problem. However, these methods focus on structure-information representation and multi-view information fusion while ignoring spatial and temporal information, which makes them ill-suited to cross-media 3D model recognition. In this paper, we represent each 3D model by its sequence of views and propose a novel Multi-View Convolutional LSTM Network (MVCLN), which uses an LSTM structure to extract temporal information and convolutional operations to extract spatial information. More specifically, spatial and temporal information are both considered during training, which effectively exploits the differences between the views' spatial information to improve the final performance. Meanwhile, we introduce a classic attention model to weight each view, which reduces redundant spatial information in the fusion step. We evaluate the proposed method on ModelNet40 for the 3D model classification and retrieval tasks. We also construct a dataset from the overlapping categories of MV-RED, ShapenetCore, and ModelNet to demonstrate the effectiveness of our approach for cross-media 3D model recognition. Experimental results and comparisons with state-of-the-art methods demonstrate that our framework achieves superior performance.
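The abstract does not specify the attention model beyond "defining the weight of each view," so the following is only a minimal NumPy sketch of the general idea: score each view's feature vector, turn the scores into softmax attention weights, and fuse the views by weighted sum. The scoring vector `w`, the bias `b`, the number of views (12), and the feature dimension (256) are illustrative assumptions, not values from the paper.

```python
import numpy as np

def softmax(x):
    # Numerically stable softmax: shift by the max before exponentiating.
    e = np.exp(x - np.max(x))
    return e / e.sum()

def attention_fuse(view_features, w, b=0.0):
    """Fuse per-view features into one descriptor via attention.

    view_features: (num_views, feature_dim) array, one row per rendered view.
    w: (feature_dim,) scoring vector (a stand-in for a learned parameter).
    Returns the fused (feature_dim,) descriptor and the attention weights.
    """
    scores = view_features @ w + b       # one scalar score per view
    alpha = softmax(scores)              # attention weights, non-negative, sum to 1
    fused = alpha @ view_features        # weighted sum over views
    return fused, alpha

# Toy example: 12 views of a shape, each described by a 256-D feature vector.
rng = np.random.default_rng(0)
views = rng.normal(size=(12, 256))
w = rng.normal(size=256)
fused, alpha = attention_fuse(views, w)
```

In the full model the per-view features would come from the convolutional LSTM rather than random data, and `w` would be learned end-to-end; the softmax step is what lets redundant views receive near-zero weight during fusion.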

Keywords: cross media; model; multi view; recognition; information

Journal Title: IEEE Access
Year Published: 2020


