Abstract Transferring grasping skills learned in simulation to the real world is attractive for many robotic applications, because collecting and labeling real-world visual grasping datasets is often expensive or even impractical. However, models trained purely on simulated data often generalize poorly to the unseen real world due to the domain gap between training and testing data. In this paper, we propose a novel domain adversarial transfer network that narrows this domain gap for cross-domain, task-constrained grasp pose detection. Generative adversarial training constrains the generator to produce simulation-like data, so that shared features can be extracted from the joint distribution. We also improve the backbone by extracting task-constrained grasp candidates and by constructing a grasp candidate evaluator with a lightweight structure and an embedded recalibration technique. To validate the effectiveness and superiority of the proposed method, we conducted grasping performance evaluations and task-oriented human–robot interaction experiments. The results indicate that the proposed method achieves state-of-the-art performance in these settings; notably, it reaches an average task-constrained grasping success rate of 83.3% in the task-oriented human–robot interaction experiment without using any real-world labels.
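The abstract does not give implementation details, but the adversarial alignment idea it describes can be illustrated with a minimal sketch: a generator maps real-world images toward the simulated domain while a discriminator tries to tell translated images from genuine simulated ones. All module definitions, shapes, and names below are illustrative assumptions, not the paper's actual architecture.

```python
# Minimal sketch of the adversarial domain-alignment idea, assuming a
# pixel-level generator that translates real depth images into
# "simulation-like" ones. Architectures here are toy stand-ins.
import torch
import torch.nn as nn

class Generator(nn.Module):
    """Toy encoder-decoder translating real depth maps toward the sim domain."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.Conv2d(16, 1, 3, padding=1), nn.Tanh(),
        )

    def forward(self, x):
        return self.net(x)

class Discriminator(nn.Module):
    """Toy classifier: genuine simulated (1) vs. translated-from-real (0)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1, 16, 4, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Flatten(), nn.LazyLinear(1),
        )

    def forward(self, x):
        return self.net(x)

gen, disc = Generator(), Discriminator()
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)

sim_batch = torch.randn(8, 1, 64, 64)   # stand-in for simulated depth images
real_batch = torch.randn(8, 1, 64, 64)  # stand-in for unlabeled real images

# Discriminator step: simulated images are "real", translations are "fake".
fake = gen(real_batch).detach()
d_loss = bce(disc(sim_batch), torch.ones(8, 1)) + \
         bce(disc(fake), torch.zeros(8, 1))
opt_d.zero_grad(); d_loss.backward(); opt_d.step()

# Generator step: fool the discriminator so real inputs land in the sim
# domain, letting a detector trained on simulated labels serve both domains.
g_loss = bce(disc(gen(real_batch)), torch.ones(8, 1))
opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

Likewise, the "embedded recalibration technique" in the evaluator is not specified in the abstract; a common lightweight choice for such a role is squeeze-and-excitation-style channel recalibration, sketched below under that assumption.

```python
# Hypothetical channel recalibration block: squeeze global context per
# channel, then rescale the feature map. Layer sizes are illustrative.
import torch
import torch.nn as nn

class ChannelRecalibration(nn.Module):
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        self.fc = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x):
        # x: (batch, channels, H, W) feature map from the evaluator backbone
        weights = self.fc(x.mean(dim=(2, 3)))           # squeeze -> excite
        return x * weights.unsqueeze(-1).unsqueeze(-1)  # rescale channels

feats = torch.randn(4, 32, 16, 16)
print(ChannelRecalibration(32)(feats).shape)  # torch.Size([4, 32, 16, 16])
```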