Although robots have been widely applied in various fields, enabling a robot to perform a wide range of tasks as humans do remains a significant challenge. One promising approach is meta-learning, which enables robots to learn from demonstrations under the concept of “learning to learn.” However, most meta-learning methods focus on teaching robots to learn from a single demonstration domain, i.e., video demonstrations of either a human or a robot performing tasks (human or robot demonstrations). Motivated by the fact that humans can acquire and merge knowledge from multiple related domains, this letter proposes a novel yet efficient Random Domain-Adaptive Meta-Learning (RDAML) framework that teaches a robot to learn from multiple demonstration domains (e.g., human demonstrations + robot demonstrations) with different random sampling parameters. Once training is complete, the trained model can adapt to a new environment given a corresponding visual demonstration. Extensive experimental results show that the model trained with the proposed RDAML algorithm achieves stronger generalization capability. We demonstrate the effectiveness of RDAML on real-world placing experiments with a UR5 robot arm, where it significantly outperforms current state-of-the-art methods when using either human or robot demonstrations to teach the robot during testing.
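
The abstract does not spell out the training procedure, but the core idea, meta-learning over randomly mixed demonstration domains, can be illustrated with a MAML-style sketch. The snippet below is a minimal, hypothetical illustration, not the paper's implementation: `PolicyNet`, `sample_task`, the two-domain list, and the per-batch mixing weight are all stand-ins assumed for the example.

```python
# Hypothetical sketch of meta-training over mixed demonstration domains.
# PolicyNet, sample_task, and the domain names are illustrative stand-ins,
# not the RDAML implementation from the letter.
import random
import torch
import torch.nn as nn
import torch.nn.functional as F

class PolicyNet(nn.Module):
    """Toy policy: maps a flattened visual observation to an action vector."""
    def __init__(self, obs_dim=16, act_dim=4):
        super().__init__()
        self.fc1 = nn.Linear(obs_dim, 64)
        self.fc2 = nn.Linear(64, act_dim)

    def forward(self, x, params=None):
        # With `params`, run a functional pass using inner-loop-adapted weights.
        if params is None:
            params = [self.fc1.weight, self.fc1.bias,
                      self.fc2.weight, self.fc2.bias]
        h = F.relu(F.linear(x, params[0], params[1]))
        return F.linear(h, params[2], params[3])

def sample_task(domain, obs_dim=16, act_dim=4, n=8):
    """Stand-in loader: returns (support, query) batches for one task."""
    make = lambda: (torch.randn(n, obs_dim), torch.randn(n, act_dim))
    return make(), make()  # real code would load human/robot demo videos

def meta_train_step(policy, meta_opt, inner_lr=0.01, tasks_per_batch=4):
    """One outer update; each task's domain is drawn with a randomized ratio."""
    domains = ["human", "robot"]
    w = random.random()  # randomized per-batch domain-mixing weight
    meta_opt.zero_grad()
    meta_loss = 0.0
    for _ in range(tasks_per_batch):
        domain = random.choices(domains, weights=[w, 1.0 - w])[0]
        (xs, ys), (xq, yq) = sample_task(domain)
        params = [policy.fc1.weight, policy.fc1.bias,
                  policy.fc2.weight, policy.fc2.bias]
        # Inner loop: adapt on the support (demonstration) set.
        support_loss = F.mse_loss(policy(xs), ys)
        grads = torch.autograd.grad(support_loss, params, create_graph=True)
        adapted = [p - inner_lr * g for p, g in zip(params, grads)]
        # Outer loss: evaluate the adapted policy on the query set.
        meta_loss = meta_loss + F.mse_loss(policy(xq, adapted), yq)
    (meta_loss / tasks_per_batch).backward()
    meta_opt.step()

policy = PolicyNet()
meta_opt = torch.optim.Adam(policy.parameters(), lr=1e-3)
for _ in range(100):
    meta_train_step(policy, meta_opt)
```

At test time, the same inner-loop adaptation would be applied to a single new visual demonstration; the randomized per-batch domain ratio above is one plausible reading of the abstract's "different random sampling parameters," not a confirmed detail of the method.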