Brain–computer interfaces are emerging as an important research area and aim to create a channel of understanding between a computer and the human brain so that robot–human interactions become more intuitive and user-friendly. However, decoding brain activity to derive the user's intended motion in real time remains a challenge for the control of wearable robots with multiple degrees of freedom. In this study, a new approach to controlling several degrees of freedom in a wearable robot is proposed, and its feasibility is studied by estimating the user's motion intention in real time, in terms of the task the user intends to perform, from electroencephalography (EEG) signals measured on the scalp. A time-delayed feature matrix is introduced to provide inputs to neural-network- and support-vector-machine-based classifiers that exploit the dynamic nature of the EEG signals for motion-intention prediction. The experimental results indicate the effectiveness of the proposed methodology in estimating the user's motion intention, in terms of the intended task to perform.
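As a rough illustration of the pipeline the abstract describes, the sketch below builds a time-delayed feature matrix by stacking each EEG sample with its preceding lagged samples and feeds it to a support-vector-machine classifier. This is a minimal, hypothetical reconstruction, not the authors' implementation: the lag count, channel count, task labels, and the helper `time_delayed_features` are all assumptions made for illustration, using scikit-learn in place of whatever toolchain the study used.

```python
import numpy as np
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC


def time_delayed_features(eeg, n_lags):
    """Stack each sample with its n_lags predecessors.

    eeg: array of shape (n_samples, n_channels)
    returns: array of shape (n_samples - n_lags, n_channels * (n_lags + 1))
    """
    # One block per lag; row i of the result spans samples i .. i + n_lags.
    blocks = [eeg[lag : len(eeg) - n_lags + lag] for lag in range(n_lags + 1)]
    return np.hstack(blocks)


# Hypothetical stand-in data: 1000 EEG samples, 8 channels, 3 intended tasks.
rng = np.random.default_rng(0)
eeg = rng.standard_normal((1000, 8))
labels = rng.integers(0, 3, size=1000)

n_lags = 5
X = time_delayed_features(eeg, n_lags)
y = labels[n_lags:]  # label each delayed window by its most recent sample

# SVM classifier over the time-delayed features; an analogous neural-network
# classifier could be trained on the same matrix.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
clf.fit(X, y)
print(clf.predict(X[:5]))  # predicted task labels for the first windows
```

In a real-time setting, the same lagged window would be assembled from the most recent EEG samples at each control step, so the classifier's output can drive the wearable robot's degrees of freedom as the signals arrive.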