A Framework of Improving Human Demonstration Efficiency for Goal-Directed Robot Skill Learning

Robot learning from humans allows robots to automatically adapt to stochastic and dynamic environments by learning from nontechnical end users' demonstrations, an approach best known as robot programming by demonstration, robot learning from demonstration, apprenticeship learning, or imitation learning. Although most of these methods are probabilistic and their performance depends heavily on the demonstrated data, measuring and evaluating human demonstrations is rarely investigated. A poorly demonstrated data set with useless prior knowledge or redundant demonstrations increases the complexity and time cost of robot learning. To solve these problems, a goal-directed robot skill learning framework named GPm-MOGP is presented. It 1) decides when and where to add a new demonstration by calculating the trajectory uncertainty; 2) determines which demonstrations are useless or redundant using Kullback–Leibler (KL) divergence; 3) implements robot skill learning with a minimum number of demonstrations using a multioutput Gaussian process; and 4) learns orientation uncertainty and representation by combining logarithmic and exponential maps. The proposed framework significantly reduces the demonstration effort of nontechnical end users who lack an understanding of how and what the robot learns during the demonstration process. To evaluate the proposed framework, a pick-and-place experiment with five unseen goals was designed to verify the effectiveness of the methods. The experiment comprises two phases: 1) demonstration efficiency and 2) skill representation and reproduction. The results indicate a 60% improvement in human demonstration efficiency compared to common learning from demonstration (LfD) applications, which require at least ten demonstrations, and the robot's average success rate on the pick-and-place task reaches 85%.
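The abstract's step 2 screens out redundant demonstrations using KL divergence. As a minimal illustrative sketch (not the authors' implementation), the snippet below summarizes each demonstrated trajectory by an empirical Gaussian over its points and flags a new demonstration as redundant when its KL divergence to an existing demonstration falls below a threshold; the function names, the mean/covariance summary, and the threshold value are assumptions for illustration only:

```python
import numpy as np

def gaussian_kl(mu0, cov0, mu1, cov1):
    """Closed-form KL divergence D_KL(N(mu0, cov0) || N(mu1, cov1))."""
    d = mu0.shape[0]
    cov1_inv = np.linalg.inv(cov1)
    diff = mu1 - mu0
    return 0.5 * (
        np.trace(cov1_inv @ cov0)        # trace term
        + diff @ cov1_inv @ diff         # mean-shift (Mahalanobis) term
        - d                              # dimensionality offset
        + np.log(np.linalg.det(cov1) / np.linalg.det(cov0))
    )

def summarize(demo):
    """Empirical mean and (regularized) covariance of a (T, d) trajectory."""
    return demo.mean(axis=0), np.cov(demo.T) + 1e-6 * np.eye(demo.shape[1])

def is_redundant(new_demo, demos, threshold=0.5):
    """Flag new_demo as redundant if its Gaussian summary is close
    (low KL divergence) to that of any previously collected demo."""
    mu_new, cov_new = summarize(new_demo)
    for demo in demos:
        mu, cov = summarize(demo)
        if gaussian_kl(mu_new, cov_new, mu, cov) < threshold:
            return True
    return False
```

A demonstration nearly identical to an existing one yields a KL divergence near zero and is rejected, while a demonstration covering a genuinely new region of the workspace produces a large divergence and is kept.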

Keywords: demonstration; demonstration efficiency; robot; robot skill; skill learning

Journal Title: IEEE Transactions on Cognitive and Developmental Systems
Year Published: 2022
