
A new approach to recognition of human emotions using brain signals and music stimuli


Abstract: It is widely accepted that music can create and evoke a wide variety of emotions in the listener. However, music is an audio signal composed of many complex components that vary with time and frequency. Moreover, the perception of music is subjective and can differ with the listener's age, culture, profession, and other factors, so the same piece of music does not evoke the same feeling in everyone. It is not easy to know in advance which emotions a piece of music will trigger in a given individual. Human emotion recognition using brain signals is an active research topic in many areas, and electroencephalography (EEG) signals are widely used for emotion recognition. Many EEG-based emotion recognition methods in the literature rely on a large number of extracted features, which leads to complexity. In this study, the problem of properly recognizing human emotions while listening to music is addressed. For this problem, an EEG-based emotion recognition model is developed and a new emotion recognition method based on deep learning is proposed. Different types of music are played to the participants, and the electrical waves formed in the brain are used to recognize happy, sad, relaxed, and angry mood states. Participants are asked to listen to music from different genres in a noiseless environment. For the classification of emotions, EEG signals are first taken from different channels and spectrograms of these signals are extracted. The spectrograms are given as inputs to pre-trained AlexNet and VGG16 deep network models, and transfer learning is applied. The best classification result is obtained with VGG16. According to the results, the presented method performs well.
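The pipeline the abstract describes (EEG channel signal, spectrogram, pre-trained VGG16 with a new 4-class head trained by transfer learning) can be sketched in a few lines. This is not the authors' code; the sampling rate, spectrogram window settings, the synthetic EEG segment, and the choice of SciPy/PyTorch/torchvision are placeholder assumptions for illustration only.

import numpy as np
import torch
import torch.nn as nn
import torch.nn.functional as F
from scipy.signal import spectrogram
from torchvision import models

FS = 128          # assumed EEG sampling rate in Hz (not stated in the abstract)
N_CLASSES = 4     # happy, sad, relaxed, angry

def eeg_to_spectrogram_image(eeg_segment: np.ndarray) -> torch.Tensor:
    """Turn a 1-D EEG segment into a 3x224x224 tensor a VGG16 can consume."""
    _, _, sxx = spectrogram(eeg_segment, fs=FS, nperseg=64, noverlap=32)
    sxx = np.log1p(sxx)                                   # compress dynamic range
    img = torch.tensor(sxx, dtype=torch.float32)[None, None]  # shape (1, 1, H, W)
    img = F.interpolate(img, size=(224, 224), mode="bilinear", align_corners=False)
    img = (img - img.mean()) / (img.std() + 1e-8)          # crude normalisation
    return img.repeat(1, 3, 1, 1)                          # replicate to 3 channels

# Transfer learning: keep the ImageNet feature extractor, retrain only the head.
model = models.vgg16(weights="IMAGENET1K_V1")
for p in model.features.parameters():
    p.requires_grad = False
model.classifier[6] = nn.Linear(4096, N_CLASSES)           # new 4-way output layer

# Dummy 4-second EEG segment standing in for a real recording.
segment = np.random.randn(4 * FS)
logits = model(eeg_to_spectrogram_image(segment))
print(logits.shape)   # torch.Size([1, 4]) -> scores for the four mood states

In a real experiment, each labelled EEG segment would be converted this way and the unfrozen classifier layers trained with a standard cross-entropy loss; swapping models.vgg16 for models.alexnet gives the AlexNet variant mentioned in the abstract.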

Keywords: recognition; music; emotion recognition; brain signals

Journal Title: Applied Acoustics
Year Published: 2021
