Predicting the user’s intended locomotion mode is critical for wearable robot control to support seamless transitions when walking over changing terrain. Although machine vision has recently proven to be a promising tool for identifying upcoming terrain in the travel path, existing approaches are limited to environment perception rather than the human intent recognition essential for coordinated wearable robot operation. Hence, in this study, we aim to develop a novel system that fuses human gaze (representing user intent) and machine vision (capturing environmental information) to accurately predict the user’s locomotion mode. The system processes multimodal visual information and recognizes the user’s locomotion intent in complex scenes where multiple terrains are present. Additionally, a fusion strategy based on the dynamic time warping algorithm was developed to align temporal predictions from the individual modalities while producing flexible decisions on the timing of locomotion mode transitions for wearable robot control. System performance was validated using experimental data collected from five participants, showing high intent recognition accuracy (over 96% on average) and reliable decision-making on locomotion transitions with adjustable lead time. These promising results demonstrate the potential of fusing human gaze and machine vision for locomotion intent recognition in lower limb wearable robots.
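To illustrate the kind of DTW-based fusion the abstract describes, the sketch below aligns two temporal streams of per-frame class probabilities (one from a gaze-based model, one from a vision-based model) with standard dynamic time warping and averages the aligned frames. This is a minimal illustrative sketch only: the function names, the Euclidean frame distance, and the averaging fusion rule are assumptions, not the paper’s actual formulation.

```python
import numpy as np

def dtw_alignment(seq_a: np.ndarray, seq_b: np.ndarray):
    """Align two (T, C) probability sequences with dynamic time warping.

    Returns the optimal warping path as a list of (i, j) index pairs.
    """
    n, m = len(seq_a), len(seq_b)
    # Pairwise frame distances (Euclidean over class probabilities).
    dist = np.linalg.norm(seq_a[:, None, :] - seq_b[None, :, :], axis=-1)

    # Accumulated-cost matrix filled with the standard DTW recursion.
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost[i, j] = dist[i - 1, j - 1] + min(
                cost[i - 1, j],      # step in sequence A only
                cost[i, j - 1],      # step in sequence B only
                cost[i - 1, j - 1],  # step in both (match)
            )

    # Backtrack from (n, m) to recover the warping path.
    path, i, j = [], n, m
    while i > 0 and j > 0:
        path.append((i - 1, j - 1))
        step = np.argmin([cost[i - 1, j - 1], cost[i - 1, j], cost[i, j - 1]])
        if step == 0:
            i, j = i - 1, j - 1
        elif step == 1:
            i -= 1
        else:
            j -= 1
    return path[::-1]

def fuse_predictions(gaze_probs: np.ndarray, vision_probs: np.ndarray):
    """Average class probabilities of DTW-aligned frames (one plausible
    fusion rule; the paper's actual strategy may differ)."""
    path = dtw_alignment(gaze_probs, vision_probs)
    fused = np.array([(gaze_probs[i] + vision_probs[j]) / 2 for i, j in path])
    return fused.argmax(axis=1)  # predicted locomotion mode per aligned frame

# Example: two noisy 3-class prediction streams of different lengths.
rng = np.random.default_rng(0)
gaze = rng.dirichlet(np.ones(3), size=50)     # hypothetical gaze-model output
vision = rng.dirichlet(np.ones(3), size=60)   # hypothetical vision-model output
print(fuse_predictions(gaze, vision)[:10])
```

Because the warping path maps each vision frame to one or more gaze frames, the point along the path at which the fused prediction switches modes can be moved earlier or later, which is one way such a system could expose an adjustable lead time for triggering the robot's mode transition.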