Although human-multi-robot systems have received increased attention in recent years, current implementations rely on structured environments and specialized, research-grade hardware. This letter presents approaches that leverage the visual and inertial sensing of mobile devices to address the estimation and control challenges of multi-robot systems operating in shared spaces with human operators, where both the mobile device camera and the robots can move freely in the environment. It is shown that a subset of robots in the system can be used to maintain a reference frame that facilitates tracking and control of the remaining robots as they perform tasks, such as object retrieval, using an operator's mobile device as the only sensing and computational platform in the system. To evaluate the performance of the proposed approaches, experiments are conducted in which a system of mobile robots is commanded to retrieve objects in the environment. Results show that, compared to using visual data alone, integrating the visual and inertial data from mobile devices yields improvements in performance, flexibility, and computational efficiency when implementing human-multi-robot systems.
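The abstract does not detail the estimator, but the core idea of combining a device's high-rate inertial data with lower-rate visual fixes is typically realized with a loosely coupled filter. The sketch below is a minimal, hypothetical illustration of that pattern, not the authors' method: a 1-D Kalman filter propagates a tracked robot's position and velocity from synthetic IMU accelerations and corrects the drift whenever a (slower, noisier) visual position measurement arrives. All rates, noise values, and the synthetic data are assumptions for illustration only.

```python
# Hypothetical sketch of loosely coupled visual-inertial fusion (1-D).
# Not the paper's implementation; rates and noise levels are assumed.
import numpy as np

DT_IMU = 0.01    # assumed 100 Hz IMU sampling
VIS_EVERY = 10   # assumed 10 Hz visual fixes (every 10th IMU step)

def predict(x, P, accel, Q):
    """Propagate the [position, velocity] state with one IMU sample."""
    F = np.array([[1.0, DT_IMU], [0.0, 1.0]])
    B = np.array([0.5 * DT_IMU**2, DT_IMU])
    x = F @ x + B * accel
    P = F @ P @ F.T + Q
    return x, P

def correct(x, P, z, R):
    """Kalman update with a visual position measurement z."""
    H = np.array([[1.0, 0.0]])
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)
    x = x + K @ (z - H @ x)
    P = (np.eye(2) - K @ H) @ P
    return x, P

rng = np.random.default_rng(0)
x, P = np.zeros(2), np.eye(2)
Q = 1e-4 * np.eye(2)      # assumed process noise
R = np.array([[1e-2]])    # assumed visual measurement noise
TRUE_ACCEL = 0.1          # synthetic ground-truth acceleration (m/s^2)

for k in range(100):
    # High-rate prediction from a noisy synthetic IMU sample.
    accel = TRUE_ACCEL + rng.normal(0.0, 0.05)
    x, P = predict(x, P, accel, Q)
    # Low-rate correction from a noisy synthetic visual fix.
    if k % VIS_EVERY == 0:
        true_pos = 0.5 * TRUE_ACCEL * (k * DT_IMU) ** 2
        z = np.array([true_pos + rng.normal(0.0, 0.1)])
        x, P = correct(x, P, z, R)

print(f"fused position estimate: {x[0]:.3f} m")
```

This separation of a fast inertial prediction step from a slow visual correction step is one plausible reading of why the fused approach reported in the abstract outperforms visual data alone: the IMU bridges the gaps between camera measurements at low computational cost, while the visual fixes bound the inertial drift.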