Compared with uni-modal biometric systems, multimodal biometric systems, which use multiple sources of information to establish an individual's identity, have received considerable attention recently. However, most traditional multimodal biometric techniques extract features from each modality independently, ignoring the implicit associations between different modalities. In addition, most existing work uses hand-crafted descriptors that struggle to capture the latent semantic structure. This paper proposes to learn sparse and discriminative multimodal feature codes (SDMFCs) for multimodal finger recognition, simultaneously taking into account the specific and common information both within and across modalities. Specifically, given the multimodal finger images, we first establish a local difference matrix to capture informative texture features in local patches. Then, we jointly learn discriminative and compact binary codes by constraining the observations from multiple modalities. Finally, we develop a novel SDMFC-based multimodal finger recognition framework, which concatenates the local histograms of each divided block of the learned binary codes for classification. Experimental results on three commonly used finger databases demonstrate the effectiveness and robustness of the proposed framework in multimodal biometric tasks.
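The pipeline described above (local difference matrix, binary code learning, block-histogram features) can be sketched roughly as follows. This is a minimal illustrative sketch, not the authors' implementation: the jointly learned multimodal projection is the paper's actual contribution, and here a random matrix `W` merely stands in for it; all function names and parameters are hypothetical.

```python
import numpy as np

def local_difference_matrix(image, patch_size=3):
    """Stack differences between each pixel and its patch neighbours.

    Returns an array of shape (n_pixels, patch_size**2 - 1), one local
    difference vector per pixel (a common pixel-difference-vector setup;
    the paper's exact construction may differ).
    """
    h, w = image.shape
    r = patch_size // 2
    padded = np.pad(image.astype(float), r, mode="edge")
    diffs = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue  # skip the centre pixel itself
            shifted = padded[r + dy:r + dy + h, r + dx:r + dx + w]
            diffs.append((shifted - image).ravel())
    return np.stack(diffs, axis=1)

def binary_codes(features, projection):
    """Binarise projected features: b = sign(W^T x), mapped to {0, 1}."""
    return (features @ projection > 0).astype(np.uint8)

def block_histograms(codes, image_shape, blocks=2):
    """Concatenate per-block histograms of the binary code words."""
    h, w = image_shape
    n_bits = codes.shape[1]
    words = codes @ (1 << np.arange(n_bits))  # integer code word per pixel
    word_map = words.reshape(h, w)
    hists = []
    for rows in np.array_split(np.arange(h), blocks):
        for cols in np.array_split(np.arange(w), blocks):
            block = word_map[np.ix_(rows, cols)]
            hists.append(np.bincount(block.ravel(), minlength=2 ** n_bits))
    return np.concatenate(hists)

rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(16, 16))     # stand-in finger image
ldm = local_difference_matrix(img)            # (256, 8) difference vectors
W = rng.standard_normal((8, 4))               # placeholder for the learned projection
codes = binary_codes(ldm, W)                  # (256, 4) compact binary codes
feature = block_histograms(codes, img.shape)  # 2x2 blocks -> 4 * 16 = 64 bins
```

In the actual SDMFC framework, `W` would be optimised jointly over all finger modalities so the resulting codes encode both modality-specific and shared information; the resulting block-histogram vectors would then feed a standard classifier or nearest-neighbour matcher.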