Mapping operator motions to a robot is a key problem in teleoperation. Because the local and remote workspaces differ (e.g., in object locations), it is particularly challenging to derive smooth motion mappings that fulfill multiple goals, such as picking objects with different poses on the two sides or passing through key points. Indeed, most state-of-the-art methods rely on mode switches, leading to a discontinuous, low-transparency experience. In this letter, we propose a unified formulation for position, orientation, and velocity mappings based on the poses of objects of interest in the operator and robot workspaces, and we apply it in the context of bilateral teleoperation. We study two possible implementations of the proposed mappings: an iterative approach based on locally weighted translations and rotations, and a neural network approach. Evaluations are conducted both in simulation and on two torque-controlled Franka Emika Panda robots. Our results show that, despite its longer training time, the neural network approach provides faster mapping evaluations and lower interaction forces for the operator, both of which are crucial for continuous, real-time teleoperation.
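The abstract does not spell out the mapping itself, so the sketch below is only a rough, position-only illustration of the locally weighted idea under our own assumptions: a Gaussian proximity kernel, a hypothetical helper `locally_weighted_map`, and rotations omitted entirely. It is not the authors' exact formulation, which also covers orientation and velocity.

```python
import numpy as np

def locally_weighted_map(x_op, anchors_op, anchors_rob, sigma=0.1):
    """Map an operator-side position x_op into the robot workspace by
    blending the translations implied by matched object positions
    (anchors), weighted by proximity to x_op.

    anchors_op, anchors_rob: (K, 3) arrays of corresponding object
    positions in the operator and robot workspaces.
    sigma: kernel bandwidth controlling each anchor's region of influence.
    """
    # Squared distances from the operator point to each operator-side anchor
    d2 = np.sum((anchors_op - x_op) ** 2, axis=1)
    # Gaussian weights, normalized to sum to one
    w = np.exp(-d2 / (2.0 * sigma**2))
    w /= w.sum()
    # Per-anchor translation from operator to robot workspace
    t = anchors_rob - anchors_op
    # Weighted blend of translations applied to the operator point
    return x_op + w @ t

# Example: two matched object positions on each side
ops = np.array([[0.0, 0.0, 0.0], [1.0, 0.0, 0.0]])
rob = np.array([[0.2, 0.0, 0.0], [1.0, 0.5, 0.0]])
print(locally_weighted_map(np.array([0.05, 0.0, 0.0]), ops, rob))
# -> approximately [0.25, 0.0, 0.0]: near the first anchor, the map
#    is dominated by that anchor's translation, and the blend stays
#    smooth everywhere, with no mode switches.
```

A smooth kernel like this is one plausible way to obtain the continuity the letter emphasizes: each object pair warps only its neighborhood, and the weighted blend varies continuously between objects.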