SUMMARY This paper presents a novel system for human–robot interaction in object-grasping applications. Consisting of an RGB-D camera, a projector and a robot manipulator, the proposed system provides intuitive information to the human by analyzing the scene, detecting graspable objects and directly projecting numbers or symbols in front of objects. Objects are detected using a visual attention model that incorporates color, shape and depth information. The positions and orientations of the projected numbers are based on the shapes, positions and orientations of the corresponding objects. Users select a grasping target by indicating the corresponding number. Projected arrows are then created on the fly to guide a robotic arm to grasp the selected object using visual servoing and deliver the object to the human user. Experimental results are presented to demonstrate how the system is used in robot grasping tasks.
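To make the described interaction loop concrete, the sketch below outlines one possible structure for the pipeline: detect graspable objects from RGB-D input, project a numbered label in front of each object, then guide the arm toward the user's selection. All class names, function names, and parameters (including the stubbed detector and the label offset) are illustrative assumptions, not the paper's actual implementation, attention model, or servoing law.

```python
# Hypothetical sketch of the projector-guided grasp-selection loop described in
# the abstract. Names and values are placeholders, not the authors' code.
from dataclasses import dataclass
from typing import List
import numpy as np


@dataclass
class DetectedObject:
    """A graspable object found by the attention model."""
    centroid: np.ndarray      # 3-D position in the camera frame (metres)
    orientation: float        # in-plane orientation (radians)


def detect_graspable_objects(rgb: np.ndarray, depth: np.ndarray) -> List[DetectedObject]:
    """Stand-in for the visual attention model combining colour, shape and depth.
    A real detector would segment the scene; this stub returns nothing."""
    return []


def project_labels(objects: List[DetectedObject]) -> None:
    """Render a number in front of each object, aligned with its pose,
    so the user can read it on the tabletop and indicate it."""
    for idx, obj in enumerate(objects, start=1):
        # Place the label slightly in front of the object along its orientation
        # (the 0.05 m offset is an arbitrary illustrative value).
        label_pos = obj.centroid[:2] + 0.05 * np.array(
            [np.cos(obj.orientation), np.sin(obj.orientation)]
        )
        print(f"Project label '{idx}' at {label_pos}, rotated {obj.orientation:.2f} rad")


def grasp_selected(objects: List[DetectedObject], selection: int) -> None:
    """Project guidance arrows and move the arm toward the chosen object.
    The single print stands in for an image-based visual-servoing loop."""
    target = objects[selection - 1]
    print(f"Project arrow toward {target.centroid}; servo arm, grasp, and deliver")


if __name__ == "__main__":
    rgb = np.zeros((480, 640, 3), dtype=np.uint8)    # placeholder RGB frame
    depth = np.zeros((480, 640), dtype=np.float32)   # placeholder depth frame
    objects = detect_graspable_objects(rgb, depth)
    project_labels(objects)
    if objects:
        grasp_selected(objects, selection=1)
```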