Caging grasps provide a way to manipulate an object without fully immobilizing it, making them robust to uncertainty in the object's pose. Most previous work constructs caging sets from the object's geometric model. This work presents a learning-based method for caging a novel object from its image alone. A caging set is first defined using the constrained region, and a mapping from the image feature to the caging set is then constructed with a kernel regression function. To avoid collecting a large number of samples, a multi-task learning method is developed to build the regression function, in which several different caging tasks are trained within a joint model. To transfer caging experience rapidly to a new task, shape similarity is exploited for knowledge transfer. Thus, given only the shape context of a novel object, the learner can accurately predict the caging set through zero-shot learning. The proposed method can be applied to caging a target object in a complex real-world environment, where the user needs only the object's shape feature, not its geometric model. Several experiments validate the method. (C) 2019 Elsevier B.V. All rights reserved.
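The abstract does not specify the exact regression model, but the core idea it describes, predicting caging-set parameters for a novel object by weighting trained caging tasks according to shape similarity, can be illustrated with a minimal Nadaraya-Watson kernel regression sketch. All names here (rbf_kernel, predict_caging_set, the feature and parameter arrays) are hypothetical, and this is a simplified stand-in for the paper's joint multi-task model, not its actual implementation:

```python
import numpy as np

def rbf_kernel(x, y, gamma=1.0):
    """Gaussian (RBF) kernel: a common choice for measuring similarity
    between two shape-feature vectors (e.g., shape-context histograms)."""
    return np.exp(-gamma * np.sum((x - y) ** 2))

def predict_caging_set(novel_feature, train_features, train_caging_params, gamma=1.0):
    """Nadaraya-Watson kernel regression: predict caging-set parameters
    for a novel object as a shape-similarity-weighted average over the
    parameters learned for previously trained caging tasks (zero-shot,
    since no samples of the novel object are required)."""
    weights = np.array([rbf_kernel(novel_feature, f, gamma) for f in train_features])
    weights /= weights.sum()  # normalize so the prediction is a convex combination
    return weights @ np.asarray(train_caging_params)

# Hypothetical usage: shape features and caging-set parameters from
# three trained caging tasks, then a prediction for a novel shape.
features = [np.array([0.2, 0.8]), np.array([0.5, 0.5]), np.array([0.9, 0.1])]
caging_params = [np.array([1.0, 0.3]), np.array([0.8, 0.5]), np.array([0.4, 0.9])]
novel = np.array([0.4, 0.6])
print(predict_caging_set(novel, features, caging_params))
```

Because the prediction weights depend only on similarity between shape features, a task whose shape closely resembles the novel object dominates the estimate, which is the intuition behind the shape-similarity transfer described above.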
               