Physical interaction requires robots to accurately follow kinematic trajectories while modulating the interaction forces to accomplish the task and to remain safe with respect to the environment. However, current approaches rely on accurate physical models or on iterative learning. We present a versatile approach for physical interaction tasks, based on Movement Primitives (MPs), that learns such tasks solely from demonstrations, without explicitly modeling the robot or the environment. We build on Probabilistic Movement Primitives (ProMPs), which utilize the variance of the demonstrations to better generalize the encoded skill, to combine skills, and to derive a controller that exactly follows the encoded trajectory distribution. The original ProMP controller, however, requires the system dynamics to be known. We present a reformulation of ProMPs that allows accurate reproduction of the skill without modeling the system dynamics, and we further extend our approach to incorporate external sensors, such as force/torque sensors. Our approach learns physical interaction tasks solely from demonstrations and adapts the movement online to force–torque sensor input. We derive a variable-stiffness controller in closed form that reproduces both the trajectory distribution and the interaction forces present in the demonstrations. We evaluate our approach in simulated and real-robot tasks.
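For intuition about the probabilistic encoding the abstract refers to, the sketch below fits a ProMP-style weight distribution to a set of demonstrated trajectories and recovers the mean trajectory and its pointwise variance. It is a minimal illustration under standard ProMP assumptions (normalized radial basis features, a Gaussian weight distribution); the function names, the number of basis functions, and the ridge regularizer are illustrative choices rather than the authors' implementation, and the sketch does not include the paper's dynamics-free controller or the force–torque coupling.

```python
# Minimal ProMP-style encoding sketch (illustrative; not the authors' code).
import numpy as np

def rbf_features(t, n_basis=15, width=0.02):
    """Normalized Gaussian basis functions over a phase variable t in [0, 1]."""
    centers = np.linspace(0.0, 1.0, n_basis)
    phi = np.exp(-(t[:, None] - centers[None, :]) ** 2 / (2.0 * width))
    return phi / phi.sum(axis=1, keepdims=True)          # shape (T, n_basis)

def fit_promp(demos, n_basis=15, ridge=1e-6):
    """Fit a weight distribution N(mu_w, Sigma_w) from demonstrated trajectories.

    demos: list of 1-D arrays of length T, sampled on a common phase grid.
    """
    T = len(demos[0])
    phi = rbf_features(np.linspace(0.0, 1.0, T), n_basis)      # (T, n_basis)
    A = phi.T @ phi + ridge * np.eye(n_basis)                   # ridge-regularized LS
    W = np.stack([np.linalg.solve(A, phi.T @ y) for y in demos])  # (N, n_basis)
    mu_w = W.mean(axis=0)
    Sigma_w = np.cov(W, rowvar=False) + ridge * np.eye(n_basis)
    return phi, mu_w, Sigma_w

if __name__ == "__main__":
    # Toy demonstrations: noisy sine trajectories standing in for recorded skills.
    rng = np.random.default_rng(0)
    t = np.linspace(0.0, 1.0, 100)
    demos = [np.sin(2 * np.pi * t) + 0.05 * rng.standard_normal(100) for _ in range(10)]
    phi, mu_w, Sigma_w = fit_promp(demos)
    mean_traj = phi @ mu_w                                   # encoded mean trajectory
    var_traj = np.einsum("tb,bc,tc->t", phi, Sigma_w, phi)   # pointwise variance
    print(mean_traj.shape, var_traj.shape)
```

In the standard ProMP formulation, adapting the primitive to via-points reduces to Gaussian conditioning of this weight distribution; the paper's contribution extends this machinery to settings where the system dynamics are unknown and where force–torque measurements drive the online adaptation.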
               