This article addresses the challenge of improving locomotion mode recognition (LMR) for lower limb prosthetic users (LLPUs) by developing more generalizable machine learning (ML) models. Current models are mostly limited to subject-specific approaches, as subject-independent models are hindered by the high variability within the LLPU population and the limited availability of LLPU data. This article investigates leveraging non-disabled (ND) datasets to enhance model generalizability, first by identifying more appropriate sensor locations. Different methods are tested that use the ND and LLPU datasets in different ways for feature selection and model training, in order to optimize the performance of subject-independent ML models. It is shown that using a vertical sensor combination on the intact side of LLPUs, performing feature selection with only the LLPU data, and then training on both datasets combined can greatly enhance LMR accuracy, achieving 91.8% accuracy with a linear discriminant analysis (LDA) model. This approach aims to reduce the need for extensive training sessions for new users while maintaining high accuracy.
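The best-performing strategy described above (feature selection on LLPU data only, then LDA training on the ND and LLPU datasets combined) can be sketched as follows. This is a minimal illustration using synthetic data; all variable names, feature counts, and dataset sizes are assumptions, not the paper's actual setup.

```python
# Hypothetical sketch of the described pipeline: select features using
# ONLY the LLPU data, then train LDA on the combined ND + LLPU data.
# All data below is synthetic; sizes and parameters are illustrative.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SelectKBest, f_classif

rng = np.random.default_rng(0)
n_features, n_modes = 40, 5  # e.g., sensor features, locomotion modes


def synth(n_samples, shift):
    """Synthetic feature matrix with class-dependent means."""
    y = rng.integers(0, n_modes, n_samples)
    X = rng.normal(size=(n_samples, n_features))
    X[:, :10] += y[:, None] * 0.8 + shift  # informative features
    return X, y


X_nd, y_nd = synth(600, shift=0.0)      # non-disabled (ND) dataset
X_llpu, y_llpu = synth(200, shift=0.3)  # prosthetic-user (LLPU) dataset

# 1) Feature selection on the LLPU data only.
selector = SelectKBest(f_classif, k=10).fit(X_llpu, y_llpu)

# 2) Train LDA on both datasets combined, restricted to those features.
X_train = selector.transform(np.vstack([X_nd, X_llpu]))
y_train = np.concatenate([y_nd, y_llpu])
lda = LinearDiscriminantAnalysis().fit(X_train, y_train)

# Evaluate on fresh LLPU-like data (stand-in for an unseen user).
X_new, y_new = synth(200, shift=0.3)
acc = lda.score(selector.transform(X_new), y_new)
print(f"accuracy: {acc:.3f}")
```

The key design point is the ordering: the small LLPU dataset determines *which* features matter for prosthetic users, while the larger ND dataset supplies additional training samples for the final classifier.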