Recently, contactless bimodal palmprint recognition technology has attracted increased attention due to the COVID-19 pandemic. Many dual-camera-based sensors have been proposed to capture palm vein and palmprint images synchronously. However, the translation between the captured palmprint and palm vein images varies with the distance between the hand and the sensors. To address this issue, we designed a low-cost method to align the bimodal palm regions for current dual-camera systems. In this study, we first implemented a contactless palm image acquisition device with a dual-camera module and a single-point time-of-flight (ToF) ranging sensor. Using this device, we collected a dataset named DCPD from 271 different palms under different distances and light source intensities. We then propose a bimodal palm image alignment method based on the imaging and ranging models. After the system model is calibrated, the translation between the visible-light and infrared palm regions can be estimated quickly from the palm distance. Finally, we designed a convolutional neural network (CNN) to effectively extract fine- and coarse-grained palm features. Compared to widely used existing methods, the proposed network achieved the lowest equal error rate (EER) on the Tongji, IITD, and DCPD datasets, and the system's average time cost for a single identification is approximately 0.15 s. The experimental results indicate that the proposed methods achieve high efficiency and comparable accuracy. In addition, the system's EER and rank-1 accuracy on the DCPD dataset were 0.304% and 98.66%, respectively.
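The distance-dependent translation mentioned above can be pictured with a simple pinhole-camera model: for two parallel cameras separated by a fixed baseline, a palm at depth d appears shifted by roughly f·b/d pixels between the two views, plus a constant offset absorbed during calibration. The following Python snippet is a minimal sketch of this idea, not the authors' implementation; the focal length, baseline, and offset values are hypothetical placeholders standing in for the calibrated system parameters.

```python
# Hypothetical sketch (not the paper's code): estimate the pixel translation
# between the visible-light and infrared palm images from the ToF-measured
# palm distance, assuming a pinhole-camera / stereo-baseline style model.

def estimate_translation(distance_mm: float,
                         focal_px: float = 600.0,    # assumed focal length in pixels
                         baseline_mm: float = 20.0,  # assumed inter-camera baseline
                         offset_px: float = 0.0      # residual offset from calibration
                         ) -> float:
    """Return the approximate shift (in pixels) between the two palm images.

    For two parallel pinhole cameras with baseline b, a point at depth d
    shifts by roughly f * b / d pixels, plus a calibration offset.
    """
    if distance_mm <= 0:
        raise ValueError("Palm distance must be positive")
    return focal_px * baseline_mm / distance_mm + offset_px


if __name__ == "__main__":
    # Example: the shift shrinks as the palm moves away from the sensor.
    for d in (200.0, 300.0, 400.0):  # palm distances in millimetres (ToF reading)
        print(f"distance {d:.0f} mm -> translation ~{estimate_translation(d):.1f} px")
```

Because this relation only depends on the single distance reading and a few calibrated constants, the alignment can be evaluated in constant time per frame, which is consistent with the low per-identification latency reported above.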
               