Wearable sensing and computer vision could move biomechanics from specialized laboratories to natural environments, but better algorithms are needed to extract meaningful outcomes from these emerging modalities. In this article, we present new models for estimating biomechanical outcomes, the knee adduction moment (KAM) and knee flexion moment (KFM), from the fusion of smartphone cameras and wearable inertial measurement units (IMUs) in young, healthy, nonobese males. A deep learning model was developed to extract features, fuse the multimodal data, and estimate KAM and KFM. Walking data from 17 subjects were recorded with eight IMUs and two smartphone cameras. The model that used IMU-camera fusion was significantly more accurate than those using IMUs or cameras alone. The root-mean-square errors of the fusion model were 0.49
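As a rough illustration of the feature-level IMU-camera fusion the abstract describes, the sketch below pairs a recurrent encoder per modality with a shared regression head that outputs KAM and KFM over a gait cycle. It is a minimal sketch, not the authors' published architecture: the layer types, hidden sizes, and channel counts (8 IMUs × 6 channels, 17 assumed 2-D keypoints from the camera view) are illustrative assumptions.

```python
# Minimal sketch (assumed architecture, not the authors' model) of feature-level
# fusion of IMU signals and camera-derived keypoints for joint-moment estimation.
import torch
import torch.nn as nn


class ImuCameraFusionNet(nn.Module):
    def __init__(self, imu_channels=48, keypoint_channels=34, hidden=64):
        super().__init__()
        # Per-modality encoders over the time series of one gait cycle.
        self.imu_encoder = nn.LSTM(imu_channels, hidden, batch_first=True)
        self.cam_encoder = nn.LSTM(keypoint_channels, hidden, batch_first=True)
        # Shared head maps the concatenated features to KAM and KFM per time step.
        self.head = nn.Sequential(
            nn.Linear(2 * hidden, hidden),
            nn.ReLU(),
            nn.Linear(hidden, 2),  # [KAM, KFM]
        )

    def forward(self, imu_seq, cam_seq):
        # imu_seq: (batch, time, imu_channels); cam_seq: (batch, time, keypoint_channels)
        imu_feat, _ = self.imu_encoder(imu_seq)
        cam_feat, _ = self.cam_encoder(cam_seq)
        fused = torch.cat([imu_feat, cam_feat], dim=-1)  # feature-level fusion
        return self.head(fused)  # (batch, time, 2): KAM and KFM curves


# Example shapes (assumed): 8 IMUs x 6 channels = 48; 17 2-D keypoints = 34.
model = ImuCameraFusionNet()
kam_kfm = model(torch.randn(4, 100, 48), torch.randn(4, 100, 34))
print(kam_kfm.shape)  # torch.Size([4, 100, 2])
```

Single-modality baselines (IMU-only or camera-only) can be obtained by feeding only one branch to the head, which is the kind of comparison the reported fusion-versus-single-modality result implies.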
               