
Six-Dimensional Target Pose Estimation for Robot Autonomous Manipulation: Methodology and Verification


Autonomous, precise robotic grasping is challenging when the scene contains multiple objects of different shapes and poses. In this study, we propose a method of 6-D target pose estimation for robot autonomous manipulation. The proposed method is based on: 1) a fully convolutional neural network for scene semantic segmentation and 2) fast global registration to achieve target pose estimation. To verify the validity of the proposed algorithm, we built a robot grasping operation system and used the point cloud model of the target object, together with its pose estimation results, to generate the robot's grasping posture control strategy. Experimental results showed that the proposed method achieves six-degree-of-freedom pose estimation for arbitrarily placed target objects and completes autonomous grasping of the target. Comparative experiments demonstrated that the proposed target pose estimation method significantly improves average accuracy and real-time performance compared with traditional methods.
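The abstract's final stage, recovering a 6-DoF (rotation + translation) pose that aligns a target's point cloud model to the observed scene, can be illustrated with a minimal sketch. This is not the paper's fast-global-registration algorithm; it is the standard Kabsch/Umeyama least-squares rigid alignment, shown here under the simplifying assumption that point correspondences between model and scene are already known (the function name and data are hypothetical):

```python
import numpy as np

def estimate_rigid_transform(source, target):
    """Least-squares 6-DoF rigid transform mapping source points onto target.

    source, target: (N, 3) arrays of corresponding 3-D points.
    Returns rotation matrix R (3x3) and translation t (3,) such that
    target ~= source @ R.T + t. Assumes correspondences are known;
    in practice a registration method must establish them first.
    """
    src_c = source.mean(axis=0)          # centroids
    tgt_c = target.mean(axis=0)
    H = (source - src_c).T @ (target - tgt_c)  # cross-covariance
    U, _, Vt = np.linalg.svd(H)
    # Guard against a reflection (det = -1) in the SVD solution
    d = np.sign(np.linalg.det(Vt.T @ U.T))
    R = Vt.T @ np.diag([1.0, 1.0, d]) @ U.T
    t = tgt_c - R @ src_c
    return R, t

# Hypothetical usage: recover a known pose from synthetic data
rng = np.random.default_rng(0)
model = rng.random((50, 3))              # object point cloud model
angle = 0.3                              # rotate 0.3 rad about z, then translate
R0 = np.array([[np.cos(angle), -np.sin(angle), 0.0],
               [np.sin(angle),  np.cos(angle), 0.0],
               [0.0,            0.0,           1.0]])
t0 = np.array([0.5, -0.2, 1.0])
scene = model @ R0.T + t0                # observed (transformed) points
R, t = estimate_rigid_transform(model, scene)
```

The recovered `(R, t)` is exactly the kind of pose result that a grasp-planning stage would consume; robust pipelines like the one described in the abstract differ mainly in how correspondences are found (learned segmentation plus feature matching) rather than in this final alignment step.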

Keywords: pose estimation; methodology; target pose; estimation robot; target

Journal Title: IEEE Transactions on Cognitive and Developmental Systems
Year Published: 2023


