The use of electroencephalography to recognize human emotions is a key technology for advancing human–computer interaction. This study proposes an improved deep convolutional neural network model for emotion classification that uses a non-end-to-end training method and combines bottom-, middle-, and top-layer convolution features. Four sets of experiments using 4500 samples were conducted to verify model performance. In addition, feature visualization was used to extract the features from the three layers, which were then examined through a scatterplot analysis. The proposed model achieved an accuracy of 93.7%, and the extracted features exhibited the best separability among the tested models. We found that adding redundant layers did not improve model performance, and that removing the data of specific channels did not significantly reduce the model's classification performance. These results indicate that the proposed model enables emotion recognition with higher accuracy and speed than previously reported models. We believe that our approach can be implemented in various applications that require the quick and accurate identification of human emotions.
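To make the multi-level feature-fusion idea concrete, the following is a minimal sketch of a CNN that extracts convolutional features at three depths (bottom, middle, top) and concatenates them before classification. This is not the authors' code: the framework (PyTorch), the layer and channel sizes, the 32-channel EEG input arranged as a 2-D map, and the two-class output are all illustrative assumptions.

```python
# Illustrative sketch only; architecture details are assumptions, not
# taken from the paper.
import torch
import torch.nn as nn

class MultiLevelFusionCNN(nn.Module):
    def __init__(self, in_channels: int = 32, num_classes: int = 2):
        super().__init__()
        # Three successive convolutional stages over the EEG feature map.
        self.bottom = nn.Sequential(
            nn.Conv2d(in_channels, 32, kernel_size=3, padding=1),
            nn.ReLU(), nn.MaxPool2d(2))
        self.middle = nn.Sequential(
            nn.Conv2d(32, 64, kernel_size=3, padding=1),
            nn.ReLU(), nn.MaxPool2d(2))
        self.top = nn.Sequential(
            nn.Conv2d(64, 128, kernel_size=3, padding=1),
            nn.ReLU(), nn.MaxPool2d(2))
        # Global average pooling makes the three feature maps
        # size-compatible so they can be concatenated.
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.classifier = nn.Linear(32 + 64 + 128, num_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b = self.bottom(x)   # bottom-layer features
        m = self.middle(b)   # middle-layer features
        t = self.top(m)      # top-layer features
        # Fuse all three feature levels into a single vector.
        fused = torch.cat(
            [self.pool(f).flatten(1) for f in (b, m, t)], dim=1)
        return self.classifier(fused)

if __name__ == "__main__":
    # Dummy batch: 8 samples, 32 EEG channels arranged as a 16x16 map.
    logits = MultiLevelFusionCNN()(torch.randn(8, 32, 16, 16))
    print(logits.shape)  # torch.Size([8, 2])
```

Pooling each stage's output to a single vector before concatenation is one simple way to fuse features of different spatial resolutions; the paper's actual fusion and non-end-to-end training procedure may differ.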