Institut Jožef Stefan, Jamova Cesta 39, 1000 Ljubljana, Slovenia.
Faculty of Electrical Engineering, University of Ljubljana, Tržaška Cesta 25, 1000 Ljubljana, Slovenia.
Sensors (Basel). 2021 Dec 23;22(1):73. doi: 10.3390/s22010073.
In the past few years, there has been a leap from traditional palmprint recognition methodologies, which use handcrafted features, to deep-learning approaches that are able to automatically learn feature representations from the input data. However, the information extracted from such deep-learning models typically corresponds to the global image appearance, where only the most discriminative cues from the input image are considered. This characteristic is especially problematic when data are acquired in unconstrained settings, as in the case of contactless palmprint recognition systems, where visual artifacts caused by elastic deformations of the palmar surface are typically present in spatially local parts of the captured images. In this study, we address the problem of elastic deformations by introducing a new approach based on a novel CNN model designed as a two-path architecture, where one path processes the input in a holistic manner, while the second path extracts local information from smaller image patches sampled from the input image. As elastic deformations can be assumed to affect the global appearance most significantly, while having a lesser impact on spatially local image areas, the local processing path addresses the issues related to elastic deformations and thereby supplements the information from the global processing path. The model is trained with a learning objective that combines the Additive Angular Margin (ArcFace) loss and the well-known center loss. With the proposed model design, the discriminative power of the learned image representation is significantly enhanced compared to standard holistic models, which, as we show in the experimental section, leads to state-of-the-art performance for contactless palmprint recognition. Our approach is tested on two publicly available contactless palmprint datasets, namely IITD and CASIA, and is demonstrated to perform favorably against state-of-the-art methods from the literature. The source code for the proposed model is made publicly available.
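To make the two-path idea and the combined learning objective concrete, the following is a minimal PyTorch sketch, not the authors' published implementation: the backbone layers, patch-sampling strategy, embedding size, number of classes, ArcFace scale/margin, and the center-loss weight are all illustrative assumptions; only the overall structure (a global path, a local patch path, and an ArcFace-plus-center-loss objective) follows the abstract.

```python
# Illustrative sketch of a two-path palmprint model with an ArcFace + center-loss objective.
# All architectural details and hyperparameters below are assumptions, not the paper's configuration.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TwoPathPalmprintNet(nn.Module):
    """Global path encodes the whole palmprint; local path encodes sampled patches."""

    def __init__(self, embed_dim=512, patch_size=56, num_patches=9):
        super().__init__()
        def small_cnn(out_dim):  # hypothetical lightweight backbone
            return nn.Sequential(
                nn.Conv2d(1, 32, 3, stride=2, padding=1), nn.ReLU(),
                nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                nn.Linear(64, out_dim),
            )
        self.global_path = small_cnn(embed_dim // 2)
        self.local_path = small_cnn(embed_dim // 2)
        self.patch_size = patch_size
        self.num_patches = num_patches

    def forward(self, x):
        g = self.global_path(x)  # holistic embedding of the full image
        # Sample several patches; averaging their embeddings keeps the local
        # path less sensitive to where elastic deformations occur.
        _, _, h, w = x.shape
        patch_embs = []
        for _ in range(self.num_patches):
            top = torch.randint(0, h - self.patch_size + 1, (1,)).item()
            left = torch.randint(0, w - self.patch_size + 1, (1,)).item()
            patch = x[:, :, top:top + self.patch_size, left:left + self.patch_size]
            patch_embs.append(self.local_path(patch))
        l = torch.stack(patch_embs).mean(dim=0)
        return F.normalize(torch.cat([g, l], dim=1), dim=1)


class ArcFaceHead(nn.Module):
    """Additive angular margin (ArcFace) classification loss."""

    def __init__(self, embed_dim, num_classes, s=64.0, m=0.5):
        super().__init__()
        self.weight = nn.Parameter(torch.randn(num_classes, embed_dim))
        self.s, self.m = s, m

    def forward(self, emb, labels):
        cos = F.linear(F.normalize(emb), F.normalize(self.weight)).clamp(-1 + 1e-7, 1 - 1e-7)
        target = torch.cos(torch.acos(cos) + self.m)          # add angular margin
        logits = cos.scatter(1, labels.unsqueeze(1), target.gather(1, labels.unsqueeze(1)))
        return F.cross_entropy(self.s * logits, labels)


def center_loss(emb, labels, centers):
    # Pulls each embedding toward its class center.
    return ((emb - centers[labels]) ** 2).sum(dim=1).mean()


# Usage sketch (class count, batch, and the 0.01 center-loss weight are placeholders):
model = TwoPathPalmprintNet()
head = ArcFaceHead(embed_dim=512, num_classes=460)
centers = nn.Parameter(torch.randn(460, 512))
x, y = torch.randn(8, 1, 224, 224), torch.randint(0, 460, (8,))
emb = model(x)
loss = head(emb, y) + 0.01 * center_loss(emb, y, centers)
```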