
Learning on the Job: Long-Term Behavioural Adaptation in Human-Robot Interactions



In this work, we propose a framework that allows autonomous robots deployed for extended periods of time in public spaces to adapt their own behaviour online from user interactions. The robot behaviour planning is embedded in a Reinforcement Learning (RL) framework, where the objective is maximising the level of overall user engagement during the interactions. We use the Upper-Confidence-Bound Value-Iteration (UCBVI) algorithm, which offers an effective way of managing the exploration-exploitation trade-off in real-time interactions. An engagement model trained end-to-end generates the reward function in real time during policy execution. We test this approach in a public museum in Lincoln (U.K.), where the robot is deployed as a tour guide for visitors. Results show that after a couple of months of exploration, the robot policy learned to maintain user engagement for longer, with a 22.8% increase over the initial static policy in the number of items visited during the tour and a 30% increase in the probability of completing the tour. This work is a promising step toward behavioural adaptation in long-term scenarios for robotics applications in social settings.
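
The abstract names Upper-Confidence-Bound Value-Iteration (UCBVI) as the mechanism for balancing exploration and exploitation. The sketch below illustrates the general UCBVI recipe only: optimistic backward induction over a small tabular, finite-horizon MDP with a count-based exploration bonus. The state and action sizes, horizon, bonus constant, and the stand-in random reward are assumptions made for this example and are not taken from the paper, where the reward would instead come from the learned engagement model.

```python
# Illustrative UCBVI-style sketch for a small tabular, finite-horizon MDP.
# All sizes and constants below are assumptions for the example, not the
# authors' implementation.
import numpy as np

n_states, n_actions, horizon = 5, 3, 10   # assumed sizes for a tour-like MDP
bonus_c = 1.0                              # assumed exploration constant

# Empirical statistics gathered from past interactions
# (initialised to ones to avoid division by zero in the sketch).
counts = np.ones((n_states, n_actions))                  # visit counts N(s, a)
trans_counts = np.ones((n_states, n_actions, n_states))  # N(s, a, s')
reward_sum = np.zeros((n_states, n_actions))             # accumulated rewards

def ucbvi_plan():
    """Backward induction with an optimism bonus on each (s, a) pair."""
    p_hat = trans_counts / trans_counts.sum(axis=2, keepdims=True)
    r_hat = reward_sum / counts
    bonus = bonus_c * np.sqrt(np.log(counts.sum()) / counts)

    q = np.zeros((horizon + 1, n_states, n_actions))
    for h in range(horizon - 1, -1, -1):
        v_next = q[h + 1].max(axis=1)
        q[h] = np.minimum(r_hat + bonus + p_hat @ v_next, horizon)  # clip at H
    return q

def update(s, a, r, s_next):
    """Record one interaction step; r would come from the engagement model."""
    counts[s, a] += 1
    trans_counts[s, a, s_next] += 1
    reward_sum[s, a] += r

# Example episode: act greedily w.r.t. the optimistic Q-values, then
# re-plan with ucbvi_plan() before the next episode.
q = ucbvi_plan()
s = 0
for h in range(horizon):
    a = int(q[h, s].argmax())
    # Stand-in reward and transition, purely to keep the sketch self-contained;
    # in the deployed system the reward is the real-time engagement estimate.
    r, s_next = np.random.rand(), np.random.randint(n_states)
    update(s, a, r, s_next)
    s = s_next
```

Acting greedily with respect to the optimistic Q-values naturally favours state-action pairs that have been tried rarely, which is the general property UCBVI relies on to manage exploration during deployment.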

Keywords: long term; robotics; behavioural adaptation; robot

Journal Title: IEEE Robotics and Automation Letters
Year Published: 2022
