Multimodal sensor fusion can improve the performance of human–machine interfaces (HMIs). However, increased sensing modalities and sensor count often introduce excess redundancy, and when deep learning approaches are applied, the recognition system can become overly complex and difficult for humans to understand. In this article, we propose an explainable artificial intelligence (XAI) approach to reduce redundancies in inertial measurement unit (IMU) and electromyography (EMG) multimodal systems and optimize sensor placement for prosthetic hand control. Four attribution algorithms and four quantitative evaluation algorithms were applied to an open-source dataset of 17 hand gestures from 60 healthy subjects and 11 amputees to explore the working mechanism behind the multimodal system. Using the XAI approach, we reduced the total number of required sensors by 40% while maintaining the same level of accuracy. These results could enable optimized HMI system design with reduced sensor and manufacturing costs. The proposed approach lays the foundation for improving HMI systems by reducing complexity and revealing explainable information that is typically hidden within deep neural networks, thereby facilitating patients' daily use of prosthetic hands and helping improve their quality of life.
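The abstract does not name the four attribution algorithms or the evaluation pipeline, so the following is only a minimal sketch of the general idea: score each sensor channel by the magnitude of its attributions and prune the lowest-ranked channels. The use of Captum's IntegratedGradients, the stand-in classifier, and the sensor/window dimensions are all illustrative assumptions, not the authors' actual method.

```python
# Sketch (not the authors' pipeline): rank multimodal sensor channels by mean
# absolute attribution and keep only the most informative ones.
# Assumes a trained PyTorch gesture classifier taking input of shape
# (batch, n_sensors, window_len); all names and sizes here are hypothetical.

import torch
from captum.attr import IntegratedGradients  # one example attribution method


def rank_sensors(model, inputs, targets):
    """Return sensor indices sorted from most to least important."""
    ig = IntegratedGradients(model)
    # Attributions have the same shape as the inputs.
    attr = ig.attribute(inputs, target=targets)
    # Aggregate |attribution| over batch and time -> one score per sensor.
    sensor_scores = attr.abs().mean(dim=(0, 2))
    return torch.argsort(sensor_scores, descending=True)


if __name__ == "__main__":
    n_sensors, window_len = 20, 200               # illustrative EMG + IMU channel count
    model = torch.nn.Sequential(                  # stand-in for the real gesture classifier
        torch.nn.Flatten(),
        torch.nn.Linear(n_sensors * window_len, 17),  # 17 hand gestures
    )
    x = torch.randn(32, n_sensors, window_len)    # synthetic data for the sketch
    y = torch.randint(0, 17, (32,))
    ranking = rank_sensors(model, x, y)
    keep = ranking[: int(0.6 * n_sensors)]        # keep top 60%, mirroring the ~40% reduction
    print("sensors retained:", sorted(keep.tolist()))
```

In practice the retained subset would then be used to retrain or fine-tune the classifier and verify that accuracy is preserved, which is the criterion the abstract reports.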