High-precision robotic grasping is necessary for extensive grasping applications in the future. Most previous grasp detection methods pay insufficient attention to learning grasp-related features, and their detection accuracy is therefore limited. In this letter, a novel attention-augmented grasp detection network (AAGDN) is presented to generate accurate grasp poses for unknown objects. The proposed AAGDN has three carefully designed components that enable it to achieve higher accuracy than existing methods. First, we construct a coordinate attention residual module to extract positional information and improve the spatial sensitivity of features. Then, we propose an effective feature fusion module to bridge the resolution and semantic gaps between different-level features and obtain efficient feature representations. Lastly, a feature augmentation pyramid module is developed to enhance grasp-related features as needed and reduce the loss of information. Extensive experiments on three public datasets and various real-world scenes show that the proposed AAGDN outperforms current methods. Our model obtains state-of-the-art grasp detection accuracy of 99.3% and 96.2% on the Cornell and Jacquard datasets, respectively. Moreover, in physical grasping experiments, the AAGDN attains a 94.6% success rate for unseen objects in cluttered scenes, which further demonstrates the accuracy and robustness of our method in grasping novel objects.
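To make the coordinate-attention idea concrete, the following is a minimal NumPy sketch, not the authors' implementation: it pools the feature map along each spatial axis, gates the input with direction-aware sigmoid attention, and adds a residual connection. The channel-mixing matrices `w_h` and `w_w` stand in for the module's learned 1x1 convolutions and are illustrative assumptions.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def coord_attn_residual(x, w_h, w_w):
    """Sketch of a coordinate-attention residual block.

    x   : feature map of shape (C, H, W)
    w_h : (C, C) channel-mixing weights for the height branch
    w_w : (C, C) channel-mixing weights for the width branch
    """
    # Directional pooling: average along width and along height,
    # preserving positional information in the remaining axis.
    pool_h = x.mean(axis=2)            # (C, H)
    pool_w = x.mean(axis=1)            # (C, W)
    # 1x1-conv-like channel mixing, then sigmoid gating.
    a_h = sigmoid(w_h @ pool_h)        # (C, H) attention over rows
    a_w = sigmoid(w_w @ pool_w)        # (C, W) attention over columns
    # Recalibrate the input with both directional attention maps.
    attended = x * a_h[:, :, None] * a_w[:, None, :]
    # Residual connection keeps the original features flowing.
    return x + attended

rng = np.random.default_rng(0)
feat = rng.standard_normal((4, 8, 8))
out = coord_attn_residual(feat, np.eye(4), np.eye(4))
```

Because each sigmoid gate lies in (0, 1), the output equals the input scaled elementwise by a factor between 1 and 2, so the block can only emphasize, never suppress, the residual signal in this simplified form.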