Wu Wei, Zhang Yuan, Li Yunpeng, Li Chuanyang
School of Information Engineering, Shenyang University, Shenyang 110044, China.
Math Biosci Eng. 2024 Feb 1;21(2):3129-3145. doi: 10.3934/mbe.2024139.
Biometric authentication prevents losses from identity misuse in the artificial intelligence (AI) era. Fusion methods integrate palmprint and palm vein features, leveraging their stability and security, and enhance counterfeiting prevention and overall system efficiency through multimodal correlations. However, most existing multimodal palmprint and palm vein feature extraction methods extract feature information independently from each modality, ignoring the correlations between intra-class samples across modalities, which are important for improving recognition performance. In this study, we addressed these issues by proposing a feature-level joint learning fusion approach for palmprint and palm vein recognition based on modal correlations. The method employs a sparse unsupervised projection algorithm with a "purification matrix" constraint to enhance the consistency of intra-modal features. This minimizes data reconstruction error, eliminating noise and extracting compact, discriminative representations. Subsequently, the partial least squares algorithm extracts subspaces with high grayscale variance and strong category correlation from each modality. A weighted sum is then used to dynamically optimize the contribution of each modality for effective classification and recognition. Experimental evaluations conducted on five multimodal databases, composed of six unimodal databases including the Chinese Academy of Sciences multispectral palmprint and palm vein databases, yielded equal error rates (EERs) of 0.0173%, 0.0192%, 0.0059%, 0.0010%, and 0.0008%. Compared to some classical methods for palmprint and palm vein fusion recognition, the algorithm significantly improves recognition performance. The algorithm is suitable for identity recognition in scenarios with high security requirements and holds practical value.
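The fusion pipeline summarized above (per-modality subspace projection followed by weighted-sum fusion of the two modalities' match scores) can be sketched roughly as follows. This is a minimal illustration on synthetic data, not the authors' method: the SVD projection stands in for the paper's sparse unsupervised projection and partial least squares steps, the cosine-similarity matcher and the fixed weight `w_print` are assumptions, since the abstract gives no implementation details.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy gallery: 5 identities, 4 samples each, two modalities (palmprint, palm vein).
n_classes, per_class, dim = 5, 4, 64
labels = np.repeat(np.arange(n_classes), per_class)
centers_p = rng.normal(size=(n_classes, dim))
centers_v = rng.normal(size=(n_classes, dim))
X_print = centers_p[labels] + 0.1 * rng.normal(size=(len(labels), dim))
X_vein = centers_v[labels] + 0.1 * rng.normal(size=(len(labels), dim))

def project(X, k=8):
    """Stand-in subspace projection via SVD; the paper instead learns a sparse
    unsupervised projection and a partial least squares subspace per modality."""
    Xc = X - X.mean(axis=0)
    _, _, Vt = np.linalg.svd(Xc, full_matrices=False)
    return Xc @ Vt[:k].T

Zp, Zv = project(X_print), project(X_vein)

def scores(Z, probe):
    """Cosine similarity of one probe vector against all gallery samples."""
    Zn = Z / np.linalg.norm(Z, axis=1, keepdims=True)
    return Zn @ (probe / np.linalg.norm(probe))

# Weighted-sum fusion of the two modalities' score vectors.
w_print = 0.6  # assumed fixed weight; the paper optimizes the weights dynamically
probe_idx = 0
fused = w_print * scores(Zp, Zp[probe_idx]) + (1 - w_print) * scores(Zv, Zv[probe_idx])
fused[probe_idx] = -np.inf  # exclude the probe itself from the gallery
pred = labels[np.argmax(fused)]
print(pred)
```

With the low noise level used here, the fused nearest-neighbor decision recovers the probe's identity; in the paper, performance is instead measured as an EER over genuine and impostor score distributions.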