In the recent past, a considerable amount of research has been done to improve the performance of face authentication systems under uncontrolled conditions such as varying illumination. However, performance has not improved significantly, since visible face images depend on illumination. To overcome this limitation of visible face images, researchers have turned to infrared (IR) face images; however, these are also not completely independent of illumination. Fusion of visible and thermal face images is therefore an active alternative in the research community. In this work, a fusion method is introduced to fuse visible and IR images for face authentication. The proposed fusion method relies on the translation-invariant à-trous wavelet transform and fractal dimension computed with the differential box-counting method. Five popular fusion metrics, namely the ratio of spatial frequency error, normalized mutual information, edge information, universal image quality index, and extended frequency comparison index, are considered to measure the effectiveness of the proposed fusion algorithm quantitatively against four state-of-the-art methods. A new similarity measure is also proposed to check how close a fused face image is to other face images. All experiments are performed on three databases, namely the IRIS benchmark face database, the UGC-JU face database, and the SCface face database. The results show that the proposed fusion method, together with the similarity measure for face authentication, outperforms all four state-of-the-art methods in terms of accuracy, precision, and recall.
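To illustrate the kind of feature the fusion method relies on, below is a minimal sketch of estimating the fractal dimension of a grayscale patch with the differential box-counting (DBC) approach mentioned in the abstract. It is not the authors' implementation; the function name, the choice of scales, and the treatment of the intensity range are illustrative assumptions.

```python
# Minimal sketch of differential box counting (DBC) for fractal dimension.
# Assumptions (not from the paper): square input patch, the listed scales,
# and a simple least-squares fit for the log-log slope.
import numpy as np

def dbc_fractal_dimension(img, scales=(2, 4, 8, 16)):
    """Estimate the fractal dimension of a square grayscale patch via DBC."""
    img = np.asarray(img, dtype=np.float64)
    M = img.shape[0]                      # patch assumed to be M x M
    G = img.max() - img.min() + 1.0       # intensity range (gray levels)
    log_Nr, log_inv_r = [], []
    for s in scales:                      # box footprint s x s in the image plane
        h = max(1.0, s * G / M)           # box height along the intensity axis
        Nr = 0
        for i in range(0, M - M % s, s):
            for j in range(0, M - M % s, s):
                block = img[i:i + s, j:j + s]
                # boxes of height h needed to cover the block's intensity surface
                l = np.ceil(block.max() / h)
                k = np.ceil(block.min() / h)
                Nr += int(l - k + 1)
        log_Nr.append(np.log(Nr))
        log_inv_r.append(np.log(M / s))   # log(1/r) with r = s / M
    # fractal dimension = slope of log(N_r) versus log(1/r)
    slope, _ = np.polyfit(log_inv_r, log_Nr, 1)
    return slope
```

In a fusion setting of this kind, such a dimension estimate is typically computed per local window of the visible and IR wavelet planes and used to weight their contributions; how the paper combines it with the à-trous coefficients is described in the full text, not in this sketch.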