
Emotion recognition from facial images with simultaneous occlusion, pose and illumination variations using meta-learning



Abstract: Automatic facial emotion recognition under real-world conditions such as partial occlusion, varying head pose and varying illumination is challenging for the machine learning community. The main reason is the lack of sufficient samples exhibiting these conditions in the baseline datasets, which makes it difficult to train a well-performing machine learning or deep learning model. To overcome this challenge, we adopt meta-learning. Meta-learning with prototypical networks (metric-based meta-learning) has proven well suited to few-shot problems without severe overfitting, and we leverage the quick adaptation of prototypical networks for emotion recognition despite the scarcity of such diverse samples. The model is trained and evaluated on the CMU Multi-PIE dataset, which contains images with partial occlusions, varying head poses and varying illumination levels. To test the system's adaptability to intra-class and inter-dataset variation, images from the AffectNet face database are used. The proposed method, named ERMOPI (Emotion Recognition using Meta-learning across Occlusion, Pose and Illumination), performs emotion recognition from facial expressions in still images using a meta-learning approach; its robustness to partial occlusion, varying head pose and illumination levels is the novelty of this work. A key benefit is that it uses fewer training samples than existing work in emotion recognition while achieving results comparable to state-of-the-art approaches: 90% accuracy on CMU Multi-PIE database images and 68% accuracy on AffectNet database images.
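The metric-based meta-learning the abstract relies on (prototypical networks) can be sketched in a few lines: each class prototype is the mean of the support-set embeddings for that class, and a query is assigned to the class of its nearest prototype. The toy 2-D "embeddings" below stand in for features a CNN would produce; all names and values are illustrative, not the paper's implementation.

```python
import numpy as np

def prototypes(support_emb, support_labels, n_classes):
    # Each class prototype is the mean of its support embeddings.
    return np.stack([support_emb[support_labels == c].mean(axis=0)
                     for c in range(n_classes)])

def classify(query_emb, protos):
    # Squared Euclidean distance to every prototype; the nearest
    # prototype determines the predicted class.
    d = ((query_emb[:, None, :] - protos[None, :, :]) ** 2).sum(axis=-1)
    return d.argmin(axis=1)

# Toy 3-way, 5-shot episode: three well-separated cluster centers
# play the role of three emotion classes in embedding space.
rng = np.random.default_rng(0)
centers = np.array([[0.0, 0.0], [5.0, 0.0], [0.0, 5.0]])
support_labels = np.repeat(np.arange(3), 5)
support_emb = centers[support_labels] + rng.normal(scale=0.3, size=(15, 2))
query_emb = centers + rng.normal(scale=0.3, size=(3, 2))  # one query per class

protos = prototypes(support_emb, support_labels, n_classes=3)
pred = classify(query_emb, protos)
print(pred)  # → [0 1 2]
```

Because the classifier is just a nearest-mean rule in embedding space, adapting to a new episode only requires averaging a handful of support embeddings, which is what makes the approach attractive when diverse occlusion/pose/illumination samples are scarce.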

Keywords: illumination; emotion recognition; meta-learning

Journal Title: Journal of King Saud University - Computer and Information Sciences
Year Published: 2021


