
Robust face recognition based on multi-task convolutional neural network.

Affiliations

School of Electronic Information, Jiangsu University of Science and Technology, Zhenjiang 212003, China.

Publication information

Math Biosci Eng. 2021 Aug 5;18(5):6638-6651. doi: 10.3934/mbe.2021329.

DOI: 10.3934/mbe.2021329
PMID: 34517549
Abstract

PURPOSE

Due to the lack of prior knowledge of face images, large illumination changes, and complex backgrounds, the accuracy of face recognition is low. To address this issue, we propose a face detection and recognition algorithm based on a multi-task convolutional neural network (MTCNN).

METHODS

In our paper, MTCNN uses three cascaded networks and adopts a candidate-box-plus-classifier strategy to perform fast and efficient face recognition. The model is trained on a database of 50 faces we collected, and Peak Signal-to-Noise Ratio (PSNR), the Structural Similarity Index Measure (SSIM), and receiver operating characteristic (ROC) curves are used to compare MTCNN, Region-CNN (R-CNN), and Faster R-CNN.
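The PSNR and SSIM metrics named above have standard closed-form definitions. As a minimal illustration (not the paper's code), the sketch below computes PSNR and a single-window "global" SSIM with NumPy; the function names `psnr` and `ssim_global` are our own, and library implementations of SSIM normally average over local sliding windows rather than using one global window.

```python
import numpy as np

def psnr(a: np.ndarray, b: np.ndarray, max_val: float = 255.0) -> float:
    """Peak Signal-to-Noise Ratio in dB (infinite for identical images)."""
    mse = np.mean((a.astype(np.float64) - b.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10(max_val**2 / mse)

def ssim_global(a: np.ndarray, b: np.ndarray, max_val: float = 255.0) -> float:
    """SSIM computed over the whole image as a single window."""
    a = a.astype(np.float64)
    b = b.astype(np.float64)
    # Standard stability constants from the SSIM definition.
    c1, c2 = (0.01 * max_val) ** 2, (0.03 * max_val) ** 2
    mu_a, mu_b = a.mean(), b.mean()
    var_a, var_b = a.var(), b.var()
    cov = ((a - mu_a) * (b - mu_b)).mean()
    return ((2 * mu_a * mu_b + c1) * (2 * cov + c2)) / (
        (mu_a**2 + mu_b**2 + c1) * (var_a + var_b + c2)
    )

# Identical images: PSNR is infinite, SSIM is exactly 1.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(32, 32)).astype(np.float64)
noisy = np.clip(img + rng.normal(0, 10, img.shape), 0, 255)
print(psnr(img, img), ssim_global(img, img))   # inf 1.0
print(psnr(img, noisy) < psnr(img, img))       # True
```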

RESULTS

The average PSNR of this technique is 1.24 dB higher than that of R-CNN and 0.94 dB higher than that of Faster R-CNN. The average SSIM value of MTCNN is 10.3% higher than R-CNN and 8.7% higher than Faster R-CNN. The Area Under Curve (AUC) of MTCNN is 97.56%, the AUC of R-CNN is 91.24%, and the AUC of Faster R-CNN is 92.01%. MTCNN has the best comprehensive performance in face recognition. For the face images with defective features, MTCNN still has the best effect.
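The AUC figures above can be read as the probability that a randomly chosen positive sample is ranked above a randomly chosen negative one. A minimal, dependency-free sketch of that computation (the function name `roc_auc` and the toy labels/scores are our own, not the paper's data):

```python
def roc_auc(labels, scores):
    """ROC AUC via the Mann-Whitney U statistic: the fraction of
    positive/negative pairs where the positive sample scores higher
    (ties count as half a win)."""
    pos = [s for y, s in zip(labels, scores) if y == 1]
    neg = [s for y, s in zip(labels, scores) if y == 0]
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# A detector that ranks every true face above every non-face gets AUC = 1.0.
print(roc_auc([1, 1, 0, 0], [0.9, 0.8, 0.3, 0.1]))  # → 1.0
```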

CONCLUSIONS

This algorithm can, to a certain extent, improve face recognition accuracy and reduce the false detection rate. Beyond its use in security-critical settings, where it helps protect people and property, it can also reduce wasted human effort and improve efficiency.


Similar articles

1. Robust face recognition based on multi-task convolutional neural network.
Math Biosci Eng. 2021 Aug 5;18(5):6638-6651. doi: 10.3934/mbe.2021329.
2. Research on Classroom Emotion Recognition Algorithm Based on Visual Emotion Classification.
Comput Intell Neurosci. 2022 Aug 8;2022:6453499. doi: 10.1155/2022/6453499. eCollection 2022.
3. Comparison of Subjective Facial Emotion Recognition and "Facial Emotion Recognition Based on Multi-Task Cascaded Convolutional Network Face Detection" between Patients with Schizophrenia and Healthy Participants.
Healthcare (Basel). 2022 Nov 24;10(12):2363. doi: 10.3390/healthcare10122363.
4. Facial Expressions Recognition for Human-Robot Interaction Using Deep Convolutional Neural Networks with Rectified Adam Optimizer.
Sensors (Basel). 2020 Apr 23;20(8):2393. doi: 10.3390/s20082393.
5. Driver Fatigue Detection Based on Convolutional Neural Networks Using EM-CNN.
Comput Intell Neurosci. 2020 Nov 18;2020:7251280. doi: 10.1155/2020/7251280. eCollection 2020.
6. Automatic extraction of cancer registry reportable information from free-text pathology reports using multitask convolutional neural networks.
J Am Med Inform Assoc. 2020 Jan 1;27(1):89-98. doi: 10.1093/jamia/ocz153.
7. Correction of out-of-FOV motion artifacts using convolutional neural network.
Magn Reson Imaging. 2020 Sep;71:93-102. doi: 10.1016/j.mri.2020.05.004. Epub 2020 May 25.
8. Efficient estimation of pharmacokinetic parameters from breast dynamic contrast-enhanced MRI based on a convolutional neural network for predicting molecular subtypes.
Phys Med Biol. 2023 Dec 4;68(24). doi: 10.1088/1361-6560/ad0e39.
9. Effective Face Detector Based on YOLOv5 and Superresolution Reconstruction.
Comput Math Methods Med. 2021 Nov 16;2021:7748350. doi: 10.1155/2021/7748350. eCollection 2021.
10. [Application of convolutional neural network to risk evaluation of positive circumferential resection margin of rectal cancer by magnetic resonance imaging].
Zhonghua Wei Chang Wai Ke Za Zhi. 2020 Jun 25;23(6):572-577. doi: 10.3760/cma.j.cn.441530-20191023-00460.

Cited by

1. Development and validation of a risk prediction model for cage subsidence after instrumented posterior lumbar fusion based on machine learning: a retrospective observational cohort study.
Front Med (Lausanne). 2023 Jul 21;10:1196384. doi: 10.3389/fmed.2023.1196384. eCollection 2023.
2. Comparison of Subjective Facial Emotion Recognition and "Facial Emotion Recognition Based on Multi-Task Cascaded Convolutional Network Face Detection" between Patients with Schizophrenia and Healthy Participants.
Healthcare (Basel). 2022 Nov 24;10(12):2363. doi: 10.3390/healthcare10122363.
3. Deep Learning-Based Pain Classifier Based on the Facial Expression in Critically Ill Patients.
Front Med (Lausanne). 2022 Mar 17;9:851690. doi: 10.3389/fmed.2022.851690. eCollection 2022.