Motor imagery electroencephalography (MI-EEG), an important subfield of active brain–computer interface (BCI) systems, can help disabled people consciously and directly control prostheses or other external devices, assisting them in daily activities. However, the low signal-to-noise ratio and low spatial resolution of MI-EEG make decoding a challenging task. Recently, deep neural network approaches have shown notable improvements over state-of-the-art BCI methods. In this study, an end-to-end scheme built around a multi-layer convolutional neural network is constructed to obtain an accurate spatial representation of multi-channel grouped MI-EEG signals and to extract the useful information present in the multi-channel signal. Invariant spatial representations are then learned through cross-subject training with a stacked sparse autoencoder framework, inspired by representative deep learning models, to enhance generalization capability. Furthermore, a quantitative experimental analysis is conducted on our private dataset and on a public BCI competition dataset. The results show the effectiveness and significance of the proposed methodology.
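
The abstract does not give implementation details, so the following is only a minimal sketch of the kind of pipeline it describes: a multi-layer CNN that produces a spatial representation of a multi-channel MI-EEG trial, followed by a stacked sparse autoencoder and a classifier. All shapes, layer sizes, the PyTorch framing, and the L1 sparsity penalty are assumptions for illustration, not the paper's specification.

```python
import torch
import torch.nn as nn

# Hypothetical trial shape: 22 EEG channels, 1000 time samples, 4 MI classes.
N_CHANNELS, N_SAMPLES, N_CLASSES = 22, 1000, 4

class SpatialCNN(nn.Module):
    """Multi-layer CNN mapping a multi-channel MI-EEG trial to a spatial
    feature vector (illustrative layer sizes, not the paper's)."""
    def __init__(self):
        super().__init__()
        self.features = nn.Sequential(
            # temporal convolution applied to each channel
            nn.Conv2d(1, 16, kernel_size=(1, 25), padding=(0, 12)),
            nn.BatchNorm2d(16),
            nn.ELU(),
            # spatial convolution across all channels
            nn.Conv2d(16, 32, kernel_size=(N_CHANNELS, 1)),
            nn.BatchNorm2d(32),
            nn.ELU(),
            nn.AvgPool2d(kernel_size=(1, 8)),
            nn.Flatten(),
        )

    def forward(self, x):  # x: (batch, 1, channels, samples)
        return self.features(x)

class SparseAE(nn.Module):
    """One layer of a stacked sparse autoencoder; sparsity is encouraged here
    with an L1 penalty on the hidden code (a common stand-in for a KL penalty)."""
    def __init__(self, in_dim, hid_dim, l1=1e-4):
        super().__init__()
        self.enc = nn.Sequential(nn.Linear(in_dim, hid_dim), nn.Sigmoid())
        self.dec = nn.Linear(hid_dim, in_dim)
        self.l1 = l1

    def forward(self, x):
        h = self.enc(x)
        recon = self.dec(h)
        loss = nn.functional.mse_loss(recon, x) + self.l1 * h.abs().mean()
        return h, loss

# Sketch of the overall flow: CNN features -> stacked sparse AEs -> classifier.
cnn = SpatialCNN()
feat_dim = cnn(torch.zeros(1, 1, N_CHANNELS, N_SAMPLES)).shape[1]
ae1, ae2 = SparseAE(feat_dim, 256), SparseAE(256, 64)
classifier = nn.Linear(64, N_CLASSES)

x = torch.randn(8, 1, N_CHANNELS, N_SAMPLES)  # a batch of EEG trials
f = cnn(x)
h1, ae1_loss = ae1(f)
h2, ae2_loss = ae2(h1)
logits = classifier(h2)
```

In such a scheme the autoencoder layers would typically be pre-trained on trials pooled across subjects (reconstruction plus sparsity losses) before the classifier is fine-tuned, which is one plausible reading of the cross-subject training described above.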