Locomotion mode recognition has been shown to contribute substantially to the precise control of robotic lower-limb prostheses under different walking conditions. In this study, we propose a temporal convolutional capsule network (TCCN) that integrates spatial-temporal features, dilated convolution, dynamic routing, and vector-based representations to recognize locomotion modes from small datasets, rather than relying on the big-data-driven neural networks commonly used for robotic prostheses. TCCN has four characteristics: it (1) extracts the spatial-temporal information in the data, (2) applies dilated convolution to cope with small data, (3) uses dynamic routing, which bears some similarity to how the human brain processes information, and (4) represents the data as vectors, in contrast to scalar-based networks such as the convolutional neural network (CNN). We compared TCCN with traditional machine learning, e.g., the support vector machine (SVM), and with big-data-driven neural networks, e.g., the CNN, recurrent neural network (RNN), temporal convolutional network (TCN), and capsule network (CN). The accuracy of TCCN is 4.1% higher than that of the CNN under 5-fold cross-validation on three locomotion modes and 5.2% higher under 5-fold cross-validation on five locomotion modes. The main confusion appears in the transition states. These results indicate that TCCN can handle small data by balancing global and local information in a way that is closer to how the human brain works, and that the capsule layer processes vector information better, retaining not only magnitude but also direction information.
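
To make the four ingredients above concrete, the following is a minimal PyTorch sketch of a TCCN-style model: dilated temporal convolutions feed a primary capsule layer whose output vectors are routed to class capsules by dynamic routing, and class scores are read off as capsule vector lengths. The class name TCCNSketch, all layer widths, the capsule counts, and the three routing iterations are illustrative assumptions, not the configuration reported in the paper.

    # Minimal sketch of a TCCN-style model; sizes and iteration counts
    # are illustrative assumptions, not the paper's exact architecture.
    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    def squash(v, dim=-1, eps=1e-8):
        """Capsule squashing: keeps vector direction, maps norm into [0, 1)."""
        norm_sq = (v ** 2).sum(dim=dim, keepdim=True)
        return (norm_sq / (1.0 + norm_sq)) * v / torch.sqrt(norm_sq + eps)

    class TCCNSketch(nn.Module):
        def __init__(self, in_channels=8, num_classes=5,
                     prim_caps=16, prim_dim=8, out_dim=16, routing_iters=3):
            super().__init__()
            # (1)+(2): dilated temporal convolutions extract spatial-temporal
            # features with a large receptive field, suited to small data.
            self.tcn = nn.Sequential(
                nn.Conv1d(in_channels, 64, kernel_size=3, dilation=1, padding=1),
                nn.ReLU(),
                nn.Conv1d(64, 64, kernel_size=3, dilation=2, padding=2),
                nn.ReLU(),
                nn.Conv1d(64, prim_caps * prim_dim, kernel_size=3,
                          dilation=4, padding=4),
            )
            self.prim_caps, self.prim_dim = prim_caps, prim_dim
            self.routing_iters = routing_iters
            # (4): transformation matrices mapping each primary capsule
            # vector into each class capsule's space.
            self.W = nn.Parameter(
                0.01 * torch.randn(1, prim_caps, num_classes, out_dim, prim_dim))

        def forward(self, x):                       # x: (batch, channels, time)
            feat = self.tcn(x).mean(dim=-1)         # pool over the time axis
            u = feat.view(-1, self.prim_caps, self.prim_dim)
            u = squash(u)                           # primary capsule vectors
            # Predictions u_hat: (batch, prim_caps, num_classes, out_dim)
            u_hat = torch.einsum('pcod,bpd->bpco', self.W.squeeze(0), u)
            # (3): dynamic routing-by-agreement between capsule layers.
            logits = torch.zeros(u.size(0), self.prim_caps, u_hat.size(2),
                                 device=x.device)
            for _ in range(self.routing_iters):
                c = F.softmax(logits, dim=2)        # coupling coefficients
                s = (c.unsqueeze(-1) * u_hat).sum(dim=1)   # (batch, C, out_dim)
                v = squash(s)                       # class capsule vectors
                logits = logits + (u_hat * v.unsqueeze(1)).sum(dim=-1)
            return v.norm(dim=-1)                   # class score = vector length

    # Usage: 4 windows of 8 sensor channels, 128 samples each -> (4, 5) scores.
    scores = TCCNSketch()(torch.randn(4, 8, 128))

Reading the class score as the length of each output capsule, while the vector's orientation encodes pose-like attributes, is what lets the capsule layer retain both magnitude and direction information, as the abstract notes.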
               