
3D Hand Pose Estimation From Monocular RGB With Feature Interaction Module



3D hand pose estimation from a monocular RGB image is a highly challenging task due to self-occlusion, diverse appearances, and inherent depth ambiguities within monocular images. Most previous methods first employ deep neural networks to fit 2D joint location maps, then combine them with implicit or explicit pose-aware features to directly regress 3D hand joint positions using their designed network structures. However, the skeleton positions and the corresponding skeleton-aware content information located in the latent space are invariably ignored. These skeleton-aware contents effectively bridge the gap between hand joint and hand skeleton information by associating the relationships between different hand joint features and the hand skeleton position distribution in 2D space. To address this issue, we propose a simple yet efficient deep neural network that directly recovers a reliable 3D hand pose from monocular RGB images with a faster estimation process. Our aim is to reduce the model's computational complexity while maintaining high-precision performance. To this end, we design a novel Feature Chat Block (FCB) to perform feature boosting, which enables enhanced interaction between joint and skeleton features. First, the FCB module updates joint features based on a semantic graph convolutional network and a multi-head self-attention mechanism: the GCN-based structure focuses on the physical hand joints encoded in a binary adjacency matrix, while the self-attention part attends to the hand joint pairs in a complementary matrix. Then, the FCB module employs query and key mechanisms, representing joint and skeleton features respectively, to further implement feature interaction. After a stack of FCB modules, our model updates the fused features in a coarse-to-fine manner and finally outputs the predicted 3D hand pose.
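The ideas above — a GCN update over physically connected joints, self-attention restricted to the complementary (non-connected) joint pairs, and a query/key interaction between joint and skeleton features — can be sketched roughly as follows. This is a minimal single-head numpy illustration of the mechanism as described in the abstract, not the authors' implementation; the function name, the absence of learned weight matrices, and the simple additive fusion are all assumptions for clarity.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def feature_chat_block(joint_feats, skel_feats, A):
    """One hypothetical FCB-style pass (illustrative, unweighted).

    joint_feats: (J, C) per-joint features
    skel_feats:  (S, C) per-skeleton (bone) features
    A:           (J, J) binary adjacency of physical joint connections
    """
    J, C = joint_feats.shape

    # 1) GCN-style update over physically connected joints (binary adjacency).
    A_hat = A + np.eye(J)                       # add self-loops
    D_inv = np.diag(1.0 / A_hat.sum(axis=1))
    gcn_out = D_inv @ A_hat @ joint_feats       # normalized neighborhood average

    # 2) Self-attention restricted to the complementary matrix:
    #    joint pairs that are NOT physically connected.
    comp = 1.0 - A_hat
    scores = joint_feats @ joint_feats.T / np.sqrt(C)
    scores = np.where(comp > 0, scores, -1e9)   # mask out physical edges
    attn_out = softmax(scores) @ joint_feats

    joint_upd = gcn_out + attn_out              # fused joint update

    # 3) Joint-skeleton interaction: joints as queries, bones as keys/values.
    cross = softmax(joint_upd @ skel_feats.T / np.sqrt(C))
    return joint_upd + cross @ skel_feats
```

Stacking several such blocks and regressing 3D coordinates from the final fused features would correspond to the coarse-to-fine pipeline the abstract describes.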
We conducted a comprehensive set of ablation experiments on the InterHand2.6M dataset to validate the effectiveness and significance of the proposed method. Additionally, experimental results on the Rendered Hand Dataset, Stereo Hand Dataset, First-Person Hand Action Dataset, and FreiHAND Dataset show that our model surpasses state-of-the-art methods with faster inference speed.

Keywords: hand; monocular rgb; skeleton; estimation; hand pose

Journal Title: IEEE Transactions on Circuits and Systems for Video Technology
Year Published: 2022

