

Region-wise landmarks-based feature extraction employing SIFT, SURF, and ORB feature descriptors to recognize Monozygotic twins from 2D/3D Facial Images.

Author Information

Sanil Gangothri, Prakasha K Krishna, Prabhu Srikanth, Nayak Vinod, Jayakala Aparna

Affiliations

Information Communication Technology, Manipal Institute of Technology (MIT), Manipal Academy of Higher Education, Manipal, 576104, India.

Information Communication Technology, Manipal Academy of Higher Education (MAHE), Manipal Institute of Technology (MIT), Manipal, 576104, India.

Publication Information

F1000Res. 2025 Jun 20;14:444. doi: 10.12688/f1000research.162911.2. eCollection 2025.

DOI: 10.12688/f1000research.162911.2
PMID: 40787613
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12334920/
Abstract

BACKGROUND

In computer vision and image processing, face recognition is an increasingly popular field of research that identifies similar faces in an image and assigns each a suitable label. It is one of the key detection techniques employed in forensics for criminal identification.

METHODS

This study explores a face recognition system for monozygotic twins that utilizes three widely recognized feature descriptor algorithms: Scale-Invariant Feature Transform (SIFT), Speeded-Up Robust Features (SURF), and Oriented FAST and Rotated BRIEF (ORB), combined with region-specific facial landmarks. These landmarks were drawn from the 468 points detected by the MediaPipe framework, which enables simultaneous recognition of multiple faces. Quantitative similarity metrics served as inputs for four classification methods: Support Vector Machine (SVM), eXtreme Gradient Boosting (XGBoost), Light Gradient Boosting Machine (LGBM), and Nearest Centroid (NC). The effectiveness of these algorithms was tested and validated on the challenging ND Twins and 3D TEC datasets from the University of Notre Dame, among the most difficult datasets for 2D and 3D face recognition research.
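The descriptor-matching step described above reduces two sets of local binary descriptors (as produced by ORB) to a scalar similarity score that can feed a classifier. As a minimal sketch — not the authors' code, and using a hand-rolled Hamming matcher with Lowe's ratio test rather than an OpenCV matcher — the idea can be illustrated in pure NumPy:

```python
import numpy as np

def hamming(a, b):
    """Hamming distance between two binary descriptors stored as uint8 arrays."""
    return int(np.unpackbits(np.bitwise_xor(a, b)).sum())

def match_similarity(desc_a, desc_b, ratio=0.75):
    """Fraction of descriptors in desc_a with a confident nearest match in desc_b.

    A match counts only if the best Hamming distance is clearly smaller than the
    second best (Lowe's ratio test) — one common way to turn raw descriptor sets
    into a scalar similarity usable as a classifier input feature.
    """
    good = 0
    for a in desc_a:
        d = sorted(hamming(a, b) for b in desc_b)
        if len(d) >= 2 and d[0] < ratio * d[1]:
            good += 1
    return good / len(desc_a)

rng = np.random.default_rng(0)
# 20 ORB-like 256-bit descriptors (32 bytes each), plus a mildly corrupted copy
# (same face, slight noise) and an unrelated random set (different face).
base = rng.integers(0, 256, size=(20, 32), dtype=np.uint8)
noise_mask = rng.integers(0, 2, size=base.shape, dtype=np.uint8)
noisy = base ^ (rng.integers(0, 256, size=base.shape, dtype=np.uint8) & noise_mask)
unrelated = rng.integers(0, 256, size=(20, 32), dtype=np.uint8)

print(match_similarity(base, noisy), match_similarity(base, unrelated))
```

The corrupted copy scores far higher than the unrelated set, which is the property that makes such a metric discriminative when computed per facial region.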

RESULTS

Testing on the University of Notre Dame's challenging ND Twins and 3D TEC datasets revealed significant performance differences. Results demonstrated that 2D facial images achieved notably higher recognition accuracy than 3D images. The 2D images produced accuracies of 88% (SVM), 83% (LGBM), 83% (XGBoost), and 79% (NC). In contrast, the 3D TEC dataset yielded lower accuracies of 74%, 72%, 72%, and 70% with the same classifiers.
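Of the four classifiers compared above, Nearest Centroid is the simplest: it assigns a sample to the class whose training-set mean (centroid) lies closest in feature space. A minimal NumPy sketch on made-up similarity-feature vectors (illustrative data only, not from the paper):

```python
import numpy as np

class NearestCentroid:
    """Minimal nearest-centroid classifier: predict the class whose
    feature-space centroid is closest in Euclidean distance."""

    def fit(self, X, y):
        self.classes_ = np.unique(y)
        self.centroids_ = np.array([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Pairwise distances from each sample to each class centroid.
        d = np.linalg.norm(X[:, None, :] - self.centroids_[None, :, :], axis=2)
        return self.classes_[np.argmin(d, axis=1)]

# Toy data: two twins whose region-wise similarity features cluster apart.
X = np.array([[0.9, 0.8], [0.85, 0.9], [0.2, 0.1], [0.15, 0.2]])
y = np.array([0, 0, 1, 1])
clf = NearestCentroid().fit(X, y)
print(clf.predict(np.array([[0.88, 0.85], [0.1, 0.15]])))  # prints [0 1]
```

SVM, XGBoost, and LGBM replace this distance rule with learned decision boundaries, which is why they can outperform NC when the similarity features overlap between twins.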

CONCLUSION

The hybrid feature extraction approach proved most effective, with maximum accuracy rates reaching 88% for 2D facial images and 74% for 3D facial images. This work contributes significantly to forensic science by enhancing the reliability of facial recognition systems when confronted with indistinguishable facial characteristics of monozygotic twins.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4c83/12334935/282318fd4a5c/f1000research-14-182977-g0023.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4c83/12334935/9bc6dc1df00c/f1000research-14-182977-g0000.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4c83/12334935/3b528ac07ab8/f1000research-14-182977-g0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4c83/12334935/5eabdc32b667/f1000research-14-182977-g0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4c83/12334935/16bf2303181c/f1000research-14-182977-g0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4c83/12334935/9750f64e44b3/f1000research-14-182977-g0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4c83/12334935/518447e43646/f1000research-14-182977-g0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4c83/12334935/d0967f0edfc3/f1000research-14-182977-g0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4c83/12334935/f78e29c6a770/f1000research-14-182977-g0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4c83/12334935/2435c20a8199/f1000research-14-182977-g0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4c83/12334935/ce15985cc872/f1000research-14-182977-g0009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4c83/12334935/ec6ec3e25f5e/f1000research-14-182977-g0010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4c83/12334935/597fcab290a2/f1000research-14-182977-g0011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4c83/12334935/44188fda829a/f1000research-14-182977-g0012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4c83/12334935/efe4c73705c6/f1000research-14-182977-g0013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4c83/12334935/e82505c5c3e3/f1000research-14-182977-g0014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4c83/12334935/bf7090e687f9/f1000research-14-182977-g0015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4c83/12334935/0b8b0c1eefdc/f1000research-14-182977-g0016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4c83/12334935/1a09bdccc344/f1000research-14-182977-g0017.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4c83/12334935/b39e78995596/f1000research-14-182977-g0018.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4c83/12334935/5f9c59b6180e/f1000research-14-182977-g0019.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4c83/12334935/4bab0fbf1446/f1000research-14-182977-g0020.jpg

