
Fast and accurate decoding of finger movements from ECoG through Riemannian features and modern machine learning techniques


Objective. Accurate decoding of individual finger movements is crucial for advanced prosthetic control. In this work, we introduce the use of Riemannian-space features and the temporal dynamics of the electrocorticography (ECoG) signal, combined with modern machine learning (ML) tools, to improve motor decoding accuracy at the level of individual fingers.

Approach. We selected a set of informative biomarkers that correlated with finger movements and evaluated the performance of state-of-the-art ML algorithms on the brain-computer interface (BCI) competition IV dataset (ECoG, three subjects) and a second ECoG dataset with a similar recording paradigm (Stanford, nine subjects). We further explored the temporal concatenation of features to effectively capture the history of the ECoG signal, which led to a significant improvement over single-epoch decoding in both classification (p < 0.01) and regression tasks (p < 0.01).

Main results. Using feature concatenation and gradient-boosted trees (the top-performing model), we achieved a classification accuracy of 77.0% in detecting individual finger movements (six-class task, including the rest state), improving over the state-of-the-art conditional random fields by 11.7% on the three BCI competition subjects. In continuous decoding of movement trajectory, our approach achieved an average Pearson's correlation coefficient (r) of 0.537 across subjects and fingers, outperforming both the BCI competition winner and the state-of-the-art approach reported on the same dataset (CNN + LSTM). Furthermore, the proposed method has low time complexity, requiring less than 17.2 s for training and less than 50 ms for inference, an approximately 250× training speed-up over the previously reported deep-learning method with state-of-the-art performance.

Significance. The proposed techniques enable fast, reliable, and high-performance prosthetic control through minimally invasive cortical signals.
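To make the described pipeline concrete, below is a minimal Python sketch, not the authors' released code: it derives Riemannian tangent-space features from epoched ECoG, concatenates features over a short temporal history, and trains a gradient-boosted-tree classifier. The synthetic data, the choice of pyriemann and scikit-learn estimators, the epoch dimensions, and the history length of three epochs are all illustrative assumptions.

```python
# Illustrative sketch (not the authors' code): Riemannian tangent-space features
# from epoched ECoG, temporal concatenation of features, and a gradient-boosted-
# tree classifier. Data shapes and hyperparameters are placeholder assumptions.
import numpy as np
from pyriemann.estimation import Covariances
from pyriemann.tangentspace import TangentSpace
from sklearn.ensemble import HistGradientBoostingClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)

# Placeholder data: n_epochs x n_channels x n_samples ECoG epochs with labels
# 0-5 (five fingers + rest). Real experiments would load the BCI Competition IV
# or Stanford recordings instead of random noise.
n_epochs, n_channels, n_samples = 600, 48, 100
X = rng.standard_normal((n_epochs, n_channels, n_samples))
y = rng.integers(0, 6, size=n_epochs)

# Riemannian features: per-epoch spatial covariance matrices projected to the
# tangent space, yielding one Euclidean feature vector per epoch.
cov = Covariances(estimator="oas").fit_transform(X)
feats = TangentSpace(metric="riemann").fit_transform(cov)

# Temporal concatenation: stack the current epoch's features with those of the
# previous `history` epochs to capture the recent history of the signal.
history = 3
stacked = np.concatenate(
    [np.roll(feats, shift=k, axis=0) for k in range(history + 1)], axis=1
)
stacked, labels = stacked[history:], y[history:]  # drop epochs lacking full history

X_tr, X_te, y_tr, y_te = train_test_split(
    stacked, labels, test_size=0.2, random_state=0
)
clf = HistGradientBoostingClassifier(max_iter=200).fit(X_tr, y_tr)
print("accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```

On real recordings one would fit the tangent-space reference and the classifier on training folds only and tune the history length per subject; the sketch omits those steps for brevity.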

Keywords: finger movements; machine learning; accurate decoding

Journal Title: Journal of Neural Engineering
Year Published: 2022
