Facial expression recognition has been a long-standing research problem over the last two decades. Histograms of oriented gradients (HOG) have proven to be an effective descriptor for preserving local information through the orientation density distribution and the gradients of edges. This paper investigates a robust and powerful use of HOG features. In particular, it shows that transforming HOG features into the frequency domain makes the descriptor well suited to characterizing illumination- and orientation-invariant facial expressions. The discrete cosine transform (DCT) is applied to map the features into the frequency domain and to retain the most discriminant ones. Finally, these features are fed to a well-known classifier to determine the underlying emotion in expressive facial images. To validate the proposed framework, we used the MMI and Extended Cohn-Kanade (CK+) datasets as well as a cross-dataset evaluation. The results indicate that the proposed framework outperforms other methods in terms of classification accuracy while using a minimum number of features.
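The pipeline described in the abstract can be sketched as: extract a HOG descriptor from an aligned grayscale face crop, apply a 1-D DCT to move it into the frequency domain, keep the leading low-frequency coefficients as the compact feature set, and train a standard classifier. The sketch below follows that outline; the HOG parameters, the number of retained coefficients, and the choice of a linear SVM are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch of a HOG -> DCT -> classifier pipeline (assumed parameters).
import numpy as np
from skimage.feature import hog
from scipy.fft import dct
from sklearn.svm import SVC

def extract_features(gray_face, n_coeffs=128):
    """HOG descriptor transformed to the frequency domain via DCT;
    only the first n_coeffs low-frequency coefficients are kept."""
    hog_vec = hog(gray_face,
                  orientations=9,
                  pixels_per_cell=(8, 8),
                  cells_per_block=(2, 2),
                  block_norm='L2-Hys')      # HOG settings are illustrative
    freq = dct(hog_vec, norm='ortho')       # 1-D DCT of the HOG vector
    return freq[:n_coeffs]                  # compact discriminant features

# Usage (X_train: aligned grayscale face crops, y_train: emotion labels):
# feats = np.array([extract_features(img) for img in X_train])
# clf = SVC(kernel='linear').fit(feats, y_train)   # "well-known classifier"
```

Truncating the DCT output keeps the low-frequency content of the descriptor, which is one plausible way to realize the paper's goal of using a minimum number of features; the exact selection strategy used by the authors is not specified in the abstract.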