Objective. This work proposes a method for two calibration schemes based on sensory feedback to extract reliable motor imagery (MI) features and provide classification outputs more correlated to the user's intention. Method. After filtering the raw electroencephalogram (EEG), a two-step method for spatial feature extraction is proposed, using Riemannian covariance matrices (RCM) and common spatial patterns (CSP). In an intermediate step combining k-nearest neighbors (k-NN) and probability analyses, it uses EEG data from trials providing feedback to find periods of time in which the user probably performed the MI task well without feedback. These periods are then used to extract features with better separability and to train a classifier for MI recognition. For evaluation, an in-house dataset was used, comprising eight healthy volunteers and two post-stroke patients who performed lower-limb MI and, consequently, received passive movements as feedback. Popular public EEG datasets (such as BCI Competition IV dataset IIb) from healthy subjects who executed upper- and lower-limb MI tasks under continuous visual feedback were also used. Results. The proposed two-step system based on Riemannian geometry (RCM–RCM) significantly outperformed baseline methods, reaching an average accuracy of up to 82.29%. These findings show that EEG data from periods of passive movement can contribute greatly to MI feature extraction. Significance. Unconscious brain responses elicited over the sensorimotor areas may be avoided or greatly reduced by applying our approach in MI-based brain–computer interfaces (BCIs). Therefore, BCI outputs more correlated to the user's intention can be obtained.
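The abstract describes the pipeline only at a high level. The sketch below shows one way such a two-step scheme could be wired together, assuming the pyriemann and scikit-learn libraries, synthetic stand-in data, and an illustrative confidence cutoff of 0.6; the estimator choices, the k value, and the threshold are assumptions for illustration, not details taken from the paper.

```python
"""Illustrative sketch only: the paper publishes no code, so the library
choices (pyriemann, scikit-learn), k value, and 0.6 confidence threshold
below are assumptions, not the authors' implementation."""
import numpy as np
from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace
from pyriemann.spatialfilters import CSP
from sklearn.neighbors import KNeighborsClassifier
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(0)

# Synthetic stand-ins for band-pass-filtered EEG epochs,
# shaped (n_trials, n_channels, n_samples), with binary MI labels.
X_fb, y_fb = rng.standard_normal((40, 8, 256)), rng.integers(0, 2, 40)
X_nofb, y_nofb = rng.standard_normal((40, 8, 256)), rng.integers(0, 2, 40)

# Step 1: Riemannian covariance matrices, projected to the tangent space.
cov = Covariances(estimator="lwf")
ts = TangentSpace(metric="riemann")
feat_fb = ts.fit_transform(cov.fit_transform(X_fb), y_fb)

# Intermediate step: a k-NN trained on the feedback trials scores each
# no-feedback trial; a high probability for the trial's own class marks
# periods in which the user probably performed the MI task well.
knn = KNeighborsClassifier(n_neighbors=5).fit(feat_fb, y_fb)
feat_nofb = ts.transform(cov.transform(X_nofb))
# Column indexing assumes the labels are exactly {0, 1}.
proba_own = knn.predict_proba(feat_nofb)[np.arange(len(y_nofb)), y_nofb]
keep = proba_own >= 0.6  # confidence cutoff: an illustrative assumption

# Step 2: CSP on the retained covariance matrices yields the final
# features used to train the MI classifier.
csp = CSP(nfilter=4, log=True)
csp.fit(cov.transform(X_nofb[keep]), y_nofb[keep])
feats = csp.transform(cov.transform(X_nofb[keep]))
clf = LinearDiscriminantAnalysis().fit(feats, y_nofb[keep])
print("training accuracy:", clf.score(feats, y_nofb[keep]))
```

In a real setting, the synthetic arrays would be replaced by band-pass-filtered EEG epochs from the calibration runs, and the confidence threshold would be tuned per subject rather than fixed.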