
Domain-adaptive emotion recognition based on horizontal vertical flow representation of EEG signals

With the development of cognitive science and brain science, brain-computer interface technology can use electroencephalogram (EEG) signals to better represent inner emotional changes. In this paper, a video-induced emotional stimulation experimental paradigm was designed, and EEG signals of 15 hearing-impaired subjects were collected under three emotions (positive, neutral, and negative). Considering the flow-diffusion properties of EEG signals, the diffusion effect based on horizontal and vertical representation forms was used to obtain spatial-domain features. After EEG preprocessing, the differential entropy (DE) feature is extracted in the frequency domain. The frequency-domain features of 62 channels are fed to two Bi-directional Long Short-Term Memory (BiLSTM) networks to obtain the spatial-domain features of the horizontal and vertical representations, respectively, and the two kinds of domain features are then fused by a residual network. An attention mechanism is applied to effectively extract emotionally representative information from the fused features. To address the cross-subject problem of emotion recognition, a domain adaptation method is utilized, and a center alignment loss function is applied to increase the inter-class distance and reduce the intra-class distance. Experimental results show average accuracies of 75.89% (subject-dependent) and 69.59% (cross-subject). Validation on the public SEED dataset achieves average accuracies of 93.99% (subject-dependent) and 84.22% (cross-subject), respectively.
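Two computational steps in the abstract can be sketched concretely. The differential entropy (DE) formula below is the standard one for an (approximately) Gaussian band-passed EEG segment, DE = 0.5 ln(2πeσ²), as commonly used on SEED-style features. The center alignment loss is a hypothetical sketch inspired by center-loss-style objectives (pull samples toward their class center, push class centers apart); the paper's exact formulation is not given in the abstract, so the function name, `margin` parameter, and hinge form here are assumptions.

```python
import numpy as np


def differential_entropy(band_signal):
    """DE of a band-passed EEG segment, assuming it is roughly
    Gaussian: DE = 0.5 * ln(2 * pi * e * sigma^2)."""
    var = np.var(band_signal)
    return 0.5 * np.log(2.0 * np.pi * np.e * var)


def center_alignment_loss(features, labels, margin=1.0):
    """Sketch of a center-based alignment objective: the intra-class
    term is the mean squared distance of samples to their class
    center; the inter-class term hinges pairwise center distances
    at `margin`, penalizing centers that sit closer than the margin."""
    classes = np.unique(labels)
    centers = np.stack([features[labels == c].mean(axis=0) for c in classes])

    # Intra-class compactness: average squared distance to own center.
    intra = np.mean([
        np.mean(np.sum((features[labels == c] - centers[i]) ** 2, axis=1))
        for i, c in enumerate(classes)
    ])

    # Inter-class separation: hinge loss on pairwise center distances.
    inter, n_pairs = 0.0, 0
    for i in range(len(classes)):
        for j in range(i + 1, len(classes)):
            d = np.linalg.norm(centers[i] - centers[j])
            inter += max(0.0, margin - d)
            n_pairs += 1
    inter /= max(n_pairs, 1)

    return intra + inter
```

For a segment with unit variance, `differential_entropy` returns 0.5·ln(2πe) ≈ 1.419; for well-separated classes whose centers are farther apart than `margin`, the sketched loss reduces to the intra-class term alone.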

Keywords: horizontal vertical representation; domain adaptation; domain features; EEG signals

Journal Title: IEEE Access
Year Published: 2023
