Robotic manipulators are playing an increasing role in a wide range of industries. However, their application to assembly tasks is hampered by the need for precise control over the environment and for task-specific coding. Cartesian impedance control is a well-established method for interacting with the environment and handling uncertainties. With the advance of Reinforcement Learning (RL), it has been suggested to learn the impedance matrices. However, most current work is limited to learning diagonal impedance matrices in addition to the trajectory itself. We argue that asymmetric impedance matrices enhance the ability to properly correct reference trajectories generated by a baseline planner, alleviating the need to learn the trajectory. Moreover, a task-specific set of asymmetric impedance matrices can be sufficient for simple tasks, alleviating the need to learn variable impedance control. We learn impedance policies for small (few-millimeter) peg-in-hole tasks using model-free RL, and investigate the advantage of using asymmetric impedance matrices and their space-invariance. Finally, we demonstrate zero-shot policy transfer from simulation to a real robot, and generalization to new real-world environments, with larger parts and semi-flexible pegs.
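To make the role of a full (asymmetric) impedance matrix concrete, the following is a minimal sketch of a Cartesian impedance law whose stiffness matrix includes an off-diagonal coupling term; the function name, matrix values, and 6-D pose-error representation are illustrative assumptions, not the paper's implementation. In the paper's setting, the gains would come from the learned RL policy rather than being hand-set.

```python
import numpy as np

def impedance_wrench(K, D, x_ref, x, xd_ref, xd):
    """Cartesian impedance law F = K (x_ref - x) + D (xd_ref - xd).

    K, D       : (6, 6) stiffness and damping matrices (K may be full/asymmetric)
    x_ref, x   : (6,) reference and measured pose-error coordinates
    xd_ref, xd : (6,) reference and measured twist
    Returns the commanded wrench, shape (6,).
    """
    return K @ (x_ref - x) + D @ (xd_ref - xd)

# Hypothetical fixed gains: a diagonal base stiffness plus one asymmetric
# coupling term that maps z-axis position error into an x-axis force --
# the kind of coupling a full matrix can express and a diagonal one cannot.
K = np.diag([500.0, 500.0, 200.0, 30.0, 30.0, 30.0])
K[0, 2] = 80.0                          # couples z error -> x force; K[2, 0] stays 0
D = np.diag(2.0 * np.sqrt(np.diag(K)))  # rough damping chosen from the diagonal of K

# One control step: correct the baseline planner's reference with the impedance wrench.
x_ref, xd_ref = np.zeros(6), np.zeros(6)              # reference pose/twist from the planner
x = np.array([0.002, -0.001, 0.004, 0.0, 0.0, 0.0])   # measured pose error (m, rad)
xd = np.zeros(6)
wrench = impedance_wrench(K, D, x_ref, x, xd_ref, xd)
print(wrench)
```

Because the correction is expressed relative to the baseline reference, a single well-chosen asymmetric matrix can steer the end-effector toward insertion without the policy having to output a new trajectory at every step.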