Generally, in-the-wild emotions are complex. They often occur as combinations of multiple basic emotions, such as fear, happiness, disgust, anger, sadness and surprise. Unlike the basic emotions, annotating complex emotions, such as pain, is a time-consuming and expensive exercise. Moreover, there is an increasing demand for profiling such complex emotions, as they are useful in many real-world application domains, including medicine, psychology, security and computer science. Traditional emotion recognition systems require a significant amount of annotated training samples to understand complex emotions, which limits their direct applicability to complex emotion detection from images and videos. Therefore, it is important to accurately learn the profiles of in-the-wild complex emotions from limited annotated samples. In this paper, we propose a deep framework to incrementally and actively profile in-the-wild complex emotions from sparse data. Our approach consists of three major components: a pre-processing unit, an optimization unit and an active learning unit. The pre-processing unit removes the variations present in complex emotion images captured in uncontrolled environments. Our novel incremental active learning algorithm, together with the optimization unit, effectively predicts the complex emotions present in the wild. Evaluation on multiple complex emotion benchmark datasets reveals that our approach performs close to human perception capability in profiling complex emotions. Further, it shows a significant performance improvement over state-of-the-art deep networks and other benchmark complex emotion profiling approaches.
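The incremental active-learning idea behind the framework can be illustrated with a small uncertainty-sampling loop. The sketch below is not the authors' published algorithm; the classifier, the synthetic features standing in for pre-processed face embeddings, the binary "pain vs. no pain" task, and the per-round query budget are all hypothetical stand-ins introduced only to show how labels could be requested incrementally from sparse data.

```python
# Minimal sketch of an incremental active-learning loop for complex-emotion
# profiling. All data, model choices, and budgets below are assumptions made
# for illustration; they do not reproduce the paper's deep framework.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical pre-processed features (e.g., embeddings of normalized face
# crops) and labels for one complex emotion, e.g. "pain" (1) vs. "no pain" (0).
X = rng.normal(size=(1000, 64))
y = (X[:, 0] + 0.5 * rng.normal(size=1000) > 0).astype(int)

labeled = list(rng.choice(len(X), size=20, replace=False))  # sparse initial annotations
unlabeled = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for round_idx in range(5):                        # incremental learning rounds
    model.fit(X[labeled], y[labeled])             # optimization step on current labels
    probs = model.predict_proba(X[unlabeled])[:, 1]
    uncertainty = np.abs(probs - 0.5)             # least-confident samples queried first
    query = [unlabeled[i] for i in np.argsort(uncertainty)[:10]]
    labeled += query                              # annotator labels only the queried samples
    unlabeled = [i for i in unlabeled if i not in query]
    print(f"round {round_idx}: labeled={len(labeled)}, "
          f"train acc={model.score(X[labeled], y[labeled]):.2f}")
```

The design choice illustrated here is the core trade-off the abstract describes: instead of annotating the full dataset up front, only the samples the current model is least certain about are sent for annotation in each round, so the profile of a complex emotion is refined incrementally from a small labeling budget.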