


Facial Image expression recognition and prediction system.

Affiliation

Department of Mathematics, SAS, Vellore Institute of Technology, Chennai, 600127, Tamilnadu, India.

Publication Information

Sci Rep. 2024 Nov 12;14(1):27760. doi: 10.1038/s41598-024-79146-z.

DOI: 10.1038/s41598-024-79146-z
PMID: 39533039
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11557705/
Abstract

A facial expression recognition system is an advanced technology that allows machines to recognize human emotions from facial expressions. To develop a robust prediction model, this work proposes three distinct architectures for a facial expression prediction system. The first model uses a support vector machine to carry out the classification task. The second model builds a Convolutional Neural Network (CNN) based on VGG-NET (Visual Geometry Group Network). After analyzing those results, a third model was used to improve the outcome: convolutional sequential layers mapped to seven distinct expressions, with inferences drawn from the behavior of the loss and accuracy metrics. The research uses a dataset of more than 35,500 facial photographs representing seven types of facial expressions. The data are analyzed and denoised as far as possible before being fed to the models. Once each model is implemented, a confusion matrix is used to assess its performance, and bar graphs and scatter plots of model loss and accuracy demonstrate the effectiveness of each architecture. The model output is visualized with the actual class and the predicted class, and every output facial image has a graphical representation, which makes the recognition system user-friendly.

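The evaluation step the abstract describes — scoring each model with a confusion matrix over seven expression classes — can be sketched as follows. This is a minimal illustration, not the paper's code; the class names are assumed from common facial-expression datasets, since the abstract does not enumerate them.

```python
import numpy as np

# Seven expression classes, assumed from typical FER datasets
# (the abstract only says "seven distinct expressions").
CLASSES = ["angry", "disgust", "fear", "happy", "neutral", "sad", "surprise"]

def confusion_matrix(y_true, y_pred, n_classes=7):
    """Rows are the actual class, columns the predicted class."""
    cm = np.zeros((n_classes, n_classes), dtype=int)
    for t, p in zip(y_true, y_pred):
        cm[t, p] += 1
    return cm

def accuracy(cm):
    # Correct predictions lie on the diagonal of the matrix.
    return cm.trace() / cm.sum()

# Toy labels: 8 images, one misclassification ("happy" predicted as "neutral").
y_true = np.array([0, 1, 2, 3, 3, 4, 5, 6])
y_pred = np.array([0, 1, 2, 3, 4, 4, 5, 6])
cm = confusion_matrix(y_true, y_pred)
print(accuracy(cm))  # 7 of 8 correct -> 0.875
```

Off-diagonal cells show which expressions the model confuses with which, which is why the paper uses this matrix rather than accuracy alone.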

Figures 1–34 from the article are available via the PMC full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11557705/

Similar Articles

1. Facial Image expression recognition and prediction system.
   Sci Rep. 2024 Nov 12;14(1):27760. doi: 10.1038/s41598-024-79146-z.
2. Facial Expressions Recognition for Human-Robot Interaction Using Deep Convolutional Neural Networks with Rectified Adam Optimizer.
   Sensors (Basel). 2020 Apr 23;20(8):2393. doi: 10.3390/s20082393.
3. TriCAFFNet: A Tri-Cross-Attention Transformer with a Multi-Feature Fusion Network for Facial Expression Recognition.
   Sensors (Basel). 2024 Aug 21;24(16):5391. doi: 10.3390/s24165391.
4. Enhanced Hybrid Vision Transformer with Multi-Scale Feature Integration and Patch Dropping for Facial Expression Recognition.
   Sensors (Basel). 2024 Jun 26;24(13):4153. doi: 10.3390/s24134153.
5. Image-based facial emotion recognition using convolutional neural network on emognition dataset.
   Sci Rep. 2024 Jun 23;14(1):14429. doi: 10.1038/s41598-024-65276-x.
6. CapsField: Light Field-Based Face and Expression Recognition in the Wild Using Capsule Routing.
   IEEE Trans Image Process. 2021;30:2627-2642. doi: 10.1109/TIP.2021.3054476. Epub 2021 Feb 5.
7. Micro-expression recognition based on multi-scale 3D residual convolutional neural network.
   Math Biosci Eng. 2024 Mar 1;21(4):5007-5031. doi: 10.3934/mbe.2024221.
8. Four-layer ConvNet to facial emotion recognition with minimal epochs and the significance of data diversity.
   Sci Rep. 2022 Apr 28;12(1):6991. doi: 10.1038/s41598-022-11173-0.
9. Two-Stream Attention Network for Pain Recognition from Video Sequences.
   Sensors (Basel). 2020 Feb 4;20(3):839. doi: 10.3390/s20030839.
10. Enhancing Facial Expression Recognition through Light Field Cameras.
   Sensors (Basel). 2024 Sep 3;24(17):5724. doi: 10.3390/s24175724.

Cited By

1. Multiscale wavelet attention convolutional network for facial expression recognition.
   Sci Rep. 2025 Jul 1;15(1):22219. doi: 10.1038/s41598-025-07416-5.

References

1. Automated Facial Expression Recognition Framework Using Deep Learning.
   J Healthc Eng. 2022 Mar 31;2022:5707930. doi: 10.1155/2022/5707930. eCollection 2022.
2. A face recognition software framework based on principal component analysis.
   PLoS One. 2021 Jul 22;16(7):e0254965. doi: 10.1371/journal.pone.0254965. eCollection 2021.
3. Bioinspired Scene Classification by Deep Active Learning With Remote Sensing Applications.
   IEEE Trans Cybern. 2022 Jul;52(7):5682-5694. doi: 10.1109/TCYB.2020.2981480. Epub 2022 Jul 4.
4. Training confounder-free deep learning models for medical applications.
   Nat Commun. 2020 Nov 26;11(1):6010. doi: 10.1038/s41467-020-19784-9.