For over three decades, the Gabor-based IrisCode approach has been acknowledged as the gold standard for iris recognition, mainly due to the high entropy and binary nature of its signatures. This method is highly effective in large-scale environments (e.g., national ID applications), where millions of comparisons per second are required. However, it is known that non-linear deformations of the iris texture, with fibers vanishing/appearing in response to pupil dilation/contraction, often flip the signature coefficients, which is the main cause of increased false rejections. This paper addresses this problem by describing a customised Deep Learning (DL) framework that: 1) virtually emulates the IrisCode feature-encoding phase; while also 2) detecting the deformations in the iris texture that may lead to bit flipping, and autonomously adapting the filter configurations for such cases. The proposed DL architecture seamlessly integrates the Gabor kernels that extract the IrisCode with a multi-scale texture analyzer, from which the biometric signatures are derived. In this sense, it can be seen as an adaptive encoder that is fully compatible with the IrisCode approach while increasing the permanence of the signatures. The experiments were conducted on two well-known datasets (CASIA-Iris-Lamp and CASIA-Iris-Thousand) and showed a notable decrease in the mean/standard deviation values of the genuine distribution, at the expense of only a marginal deterioration in the impostor scores. The resulting decision environments consistently reduce false rejections with respect to the baseline at most operating points (e.g., by over 50% at a $10^{-3}$ FAR).
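The Gabor phase quantisation that underlies the classical IrisCode (and that the proposed DL encoder emulates) can be sketched as follows. This is a minimal illustration, not the paper's implementation: the kernel parameters, the single-location response, and the function names are all assumptions chosen for clarity. Each complex Gabor response is quantised into two bits (the signs of its real and imaginary parts), and codes are compared with a fractional Hamming distance; a flipped bit caused by texture deformation directly inflates that distance.

```python
import numpy as np

def gabor_kernel(size, wavelength, sigma, theta):
    """Complex 2-D Gabor kernel (illustrative parameters, not the paper's)."""
    half = size // 2
    y, x = np.mgrid[-half:half + 1, -half:half + 1]
    xr = x * np.cos(theta) + y * np.sin(theta)           # rotated coordinate
    envelope = np.exp(-(x**2 + y**2) / (2.0 * sigma**2)) # Gaussian envelope
    carrier = np.exp(1j * 2.0 * np.pi * xr / wavelength) # complex sinusoid
    return envelope * carrier

def iriscode_bits(patch, kernel):
    """Quantise the phase of one Gabor response into 2 bits (Re>=0, Im>=0)."""
    resp = np.sum(patch * kernel)  # single-location response, for simplicity
    return np.array([resp.real >= 0, resp.imag >= 0], dtype=np.uint8)

def hamming_distance(code_a, code_b):
    """Fractional Hamming distance between two binary iris codes."""
    return float(np.mean(code_a != code_b))
```

In a full pipeline, the kernel would be convolved over the normalised (unwrapped) iris image at several scales and orientations, yielding a fixed-length binary signature; the paper's contribution is to adapt these filter configurations where deformation would otherwise flip the quantised bits.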