A Novel Emotion-Aware Method Based on the Fusion of Textual Description of Speech, Body Movements, and Facial Expressions

Emotion computing is a necessary part of advanced human–computer interaction. An appropriate description of a character's facial expressions, body language, and speaking style in a novel always enables readers to infer the character's emotions. Moreover, multimodal information is complementary and integrated. Fusing information from multiple modalities into the textual modality can produce better fusion results and overcome the bias of interpreting unimodal information. Inspired by these facts, we develop a novel emotion-aware method based on the fusion of textual descriptions of speech, body movements, and facial expressions, which reduces the dimensionality of the three signals by unifying them into a single textual component. Specifically, to fuse the multimodal features for emotion recognition, we propose a two-stage neural network. First, a bidirectional long short-term memory-conditional random field (Bi-LSTM-CRF) network and a back-propagation neural network (BPNN) analyze the extracted vocal and visual features of facial expressions, body movements, and speech to obtain textual descriptions of the different features. Second, the textual descriptions of the features are fused through a neural network with a self-organizing map (SOM) layer and compensation layers trained on a web-based corpus. The advantages of this method are that it uses depth information to track facial and body movements and employs an explainable textual intermediate representation to fuse the features. We experimentally tested the emotion-aware system in real-world applications, and the results indicate that our system recognizes human emotions quickly and stably. Compared with other unimodal and multimodal-fusion algorithms, our method is more precise, improving accuracy by up to 30% over the unimodal method.
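
The following is a minimal PyTorch sketch of the two-stage idea described in the abstract, not the authors' implementation: a Bi-LSTM tagger (with a plain linear emission head standing in for the CRF) and a small back-propagation network map each modality's features to textual-description tokens, and a fusion network then maps the combined descriptions to an emotion label. A plain MLP stands in for the SOM and compensation layers, and all module names, dimensions, and toy inputs are illustrative assumptions.

# Two-stage sketch, assuming a linear head in place of the CRF and an MLP in
# place of the SOM/compensation layers; names and dimensions are illustrative.
import torch
import torch.nn as nn

class BiLSTMTagger(nn.Module):
    """Stage 1a: Bi-LSTM tagger mapping a vocal/visual feature sequence to
    textual-description token logits (CRF decoding layer omitted)."""
    def __init__(self, feat_dim, hidden_dim, vocab_size):
        super().__init__()
        self.lstm = nn.LSTM(feat_dim, hidden_dim, bidirectional=True, batch_first=True)
        self.emit = nn.Linear(2 * hidden_dim, vocab_size)

    def forward(self, x):                  # x: (batch, time, feat_dim)
        h, _ = self.lstm(x)
        return self.emit(h)                # (batch, time, vocab_size)

class BPNN(nn.Module):
    """Stage 1b: back-propagation neural network (small MLP) mapping a static
    feature vector to textual-description token logits."""
    def __init__(self, feat_dim, hidden_dim, vocab_size):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(feat_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, vocab_size))

    def forward(self, x):
        return self.net(x)

class TextualFusion(nn.Module):
    """Stage 2: fuse the textual descriptions of the three modalities into an
    emotion prediction (plain MLP standing in for the SOM-based layers)."""
    def __init__(self, vocab_size, embed_dim, n_emotions):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, embed_dim)
        self.fuse = nn.Sequential(
            nn.Linear(3 * embed_dim, embed_dim), nn.ReLU(),
            nn.Linear(embed_dim, n_emotions))

    def forward(self, face_ids, body_ids, speech_ids):
        # Mean-pool each modality's description token embeddings, then fuse.
        parts = [self.embed(t).mean(dim=1) for t in (face_ids, body_ids, speech_ids)]
        return self.fuse(torch.cat(parts, dim=-1))

if __name__ == "__main__":
    vocab, emotions = 500, 7
    tagger = BiLSTMTagger(feat_dim=64, hidden_dim=128, vocab_size=vocab)
    bpnn = BPNN(feat_dim=32, hidden_dim=64, vocab_size=vocab)
    fusion = TextualFusion(vocab_size=vocab, embed_dim=100, n_emotions=emotions)

    speech = torch.randn(2, 50, 64)            # toy vocal feature sequence
    face = torch.randn(2, 32)                  # toy facial feature vector
    speech_ids = tagger(speech).argmax(-1)     # textual-description token ids
    face_ids = bpnn(face).argmax(-1, keepdim=True)
    body_ids = face_ids.clone()                # placeholder body description
    print(fusion(face_ids, body_ids, speech_ids).shape)   # torch.Size([2, 7])

The design point the sketch illustrates is that each modality is first reduced to a textual intermediate representation, so the fusion stage operates on human-readable descriptions rather than raw signal features, which is what makes the fusion step explainable.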

Keywords: fusion; facial expressions; emotion aware; body movements

Journal Title: IEEE Transactions on Instrumentation and Measurement
Year Published: 2022
