
An unconstrained palmprint region of interest extraction method based on lightweight networks.

Affiliations

College of Electronic Engineering (College of Artificial Intelligence), South China Agricultural University, Guangzhou, China.

Guangzhou Intelligence Oriented Technology Co., Ltd., Guangzhou, China.

Publication information

PLoS One. 2024 Aug 9;19(8):e0307822. doi: 10.1371/journal.pone.0307822. eCollection 2024.

Abstract

Accurately extracting the region of interest (ROI) of a palmprint is crucial for subsequent palmprint recognition. Under unconstrained conditions, however, the user's palm posture and angle, as well as the background and lighting of the environment, are uncontrolled, making palmprint ROI extraction a major challenge. Among existing approaches, traditional ROI extraction methods rely on image segmentation and are difficult to apply across multiple datasets under such interference, while deep-learning-based methods typically do not consider the computational cost of the model and are difficult to deploy on embedded devices. This article proposes a palmprint ROI extraction method based on lightweight networks. First, a YOLOv5-Lite network detects and coarsely localizes the palm, eliminating most of the interference from complex backgrounds. Then, an improved UNet performs keypoint detection; compared with the original UNet, this model reduces the number of parameters, improves network performance, and accelerates convergence. The model's output combines Gaussian heatmap regression with direct coordinate regression, supervised by a proposed joint loss function based on a JS (Jensen-Shannon) loss and an L2 loss. In the experiments, a mixed database consisting of five databases was used to meet the needs of practical applications. The results show that the proposed method achieves 98.3% accuracy on this database with an average detection time of only 28 ms on a GPU, outperforming other mainstream lightweight networks, and the model size is only 831k. In the open-set test, it achieves a 93.4% success rate with an average detection time of 5.95 ms on a GPU, far ahead of the latest palmprint ROI extraction algorithms, and can be applied in practice.
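The abstract describes supervision that combines Gaussian heatmap regression with direct coordinate regression under a joint JS + L2 loss. The paper's exact formulation (weighting, heatmap resolution, normalization) is not given here, so the following is only a minimal NumPy sketch of that idea: a ground-truth Gaussian heatmap for one keypoint, a Jensen-Shannon divergence between normalized heatmaps, and an L2 term on the directly regressed coordinates. The function names, `sigma`, and the `alpha`/`beta` weights are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def gaussian_heatmap(h, w, cx, cy, sigma=2.0):
    """Ground-truth heatmap: a 2-D Gaussian centered on the keypoint,
    normalized to sum to 1 so it can be treated as a distribution."""
    ys, xs = np.mgrid[0:h, 0:w]
    g = np.exp(-((xs - cx) ** 2 + (ys - cy) ** 2) / (2.0 * sigma ** 2))
    return g / g.sum()

def js_divergence(p, q, eps=1e-12):
    """Jensen-Shannon divergence between two discrete distributions."""
    p = p / (p.sum() + eps)
    q = q / (q.sum() + eps)
    m = 0.5 * (p + q)
    kl = lambda a, b: np.sum(a * np.log((a + eps) / (b + eps)))
    return 0.5 * kl(p, m) + 0.5 * kl(q, m)

def joint_loss(pred_hm, gt_hm, pred_xy, gt_xy, alpha=1.0, beta=1.0):
    """Joint supervision: JS loss on heatmaps + L2 loss on coordinates."""
    l_js = js_divergence(pred_hm.ravel(), gt_hm.ravel())
    l_l2 = np.sum((np.asarray(pred_xy) - np.asarray(gt_xy)) ** 2)
    return alpha * l_js + beta * l_l2
```

With a perfect prediction both terms vanish; any heatmap or coordinate error contributes a positive penalty, which is the property the joint loss relies on.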

