
Deploying Machine Learning Techniques for Human Emotion Detection.

Affiliations

Department of Embedded Network Systems Technology, Faculty of Artificial Intelligence, Kafrelsheikh University, Kafr El-Sheikh, Egypt.

Department of Information Technology, College of Computer and Information Sciences, Princess Nourah Bint Abdulrahman University, Riyadh 84428, Saudi Arabia.

Publication Information

Comput Intell Neurosci. 2022 Feb 2;2022:8032673. doi: 10.1155/2022/8032673. eCollection 2022.

DOI: 10.1155/2022/8032673
PMID: 35154306
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8828335/
Abstract

Emotion recognition is one of the trending research fields. It is involved in several applications. Its most interesting applications include robotic vision and interactive robotic communication. Human emotions can be detected using both speech and visual modalities. Facial expressions can be considered as ideal means for detecting the persons' emotions. This paper presents a real-time approach for implementing emotion detection and deploying it in the robotic vision applications. The proposed approach consists of four phases: preprocessing, key point generation, key point selection and angular encoding, and classification. The main idea is to generate key points using MediaPipe face mesh algorithm, which is based on real-time deep learning. In addition, the generated key points are encoded using a sequence of carefully designed mesh generator and angular encoding modules. Furthermore, feature decomposition is performed using Principal Component Analysis (PCA). This phase is deployed to enhance the accuracy of emotion detection. Finally, the decomposed features are enrolled into a Machine Learning (ML) technique that depends on a Support Vector Machine (SVM), k-Nearest Neighbor (KNN), Naïve Bayes (NB), Logistic Regression (LR), or Random Forest (RF) classifier. Moreover, we deploy a Multilayer Perceptron (MLP) as an efficient deep neural network technique. The presented techniques are evaluated on different datasets with different evaluation metrics. The simulation results reveal that they achieve a superior performance with a human emotion detection accuracy of 97%, which ensures superiority among the efforts in this field.
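The four-phase pipeline described in the abstract (key-point generation, angular encoding, PCA decomposition, ML classification) can be sketched as follows. This is a minimal illustration, not the authors' implementation: synthetic 2-D key points stand in for MediaPipe face-mesh output, the angular-encoding scheme (angle of each key point around the face centroid) is an assumption, and KNN with k=5 is just one of the classifier options the paper lists.

```python
import numpy as np

rng = np.random.default_rng(0)

def angular_encode(keypoints):
    """Encode (N, 2) key points as angles around the face centroid (assumed scheme)."""
    centered = keypoints - keypoints.mean(axis=0)
    return np.arctan2(centered[:, 1], centered[:, 0])

def pca_fit(X, n_components):
    """Feature decomposition via PCA, implemented with SVD; fit on training data only."""
    mean = X.mean(axis=0)
    _, _, Vt = np.linalg.svd(X - mean, full_matrices=False)
    return mean, Vt[:n_components]

def knn_predict(train_X, train_y, test_X, k=5):
    """k-Nearest Neighbor classifier, one of the paper's ML options."""
    dists = np.linalg.norm(test_X[:, None, :] - train_X[None, :, :], axis=2)
    nearest = np.argsort(dists, axis=1)[:, :k]
    return np.array([np.bincount(votes).argmax() for votes in train_y[nearest]])

# Synthetic stand-in for MediaPipe face-mesh output: 200 faces x 468 key points,
# with a per-point, class-dependent offset so the three classes are separable.
n_faces, n_points, n_classes = 200, 468, 3
labels = rng.integers(0, n_classes, size=n_faces)
pattern = rng.normal(size=(n_points, 2))
faces = rng.normal(size=(n_faces, n_points, 2)) + labels[:, None, None] * 0.4 * pattern

# Phase 2/3: key-point encoding, then PCA decomposition to 20 components.
features = np.stack([angular_encode(f) for f in faces])      # shape (200, 468)
mean, comps = pca_fit(features[:150], 20)
train_Z = (features[:150] - mean) @ comps.T                  # shape (150, 20)
test_Z = (features[150:] - mean) @ comps.T                   # shape (50, 20)

# Phase 4: classification on the decomposed features.
pred = knn_predict(train_Z, labels[:150], test_Z)
acc = (pred == labels[150:]).mean()
print(f"held-out accuracy: {acc:.2f}")
```

Any of the other classifiers the paper evaluates (SVM, Naïve Bayes, Logistic Regression, Random Forest, or an MLP) can be dropped in at the final phase without changing the encoding and decomposition stages.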


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/297e/8828335/374ec703dcc7/CIN2022-8032673.001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/297e/8828335/894b6056d1d8/CIN2022-8032673.002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/297e/8828335/132b817f16a3/CIN2022-8032673.003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/297e/8828335/f7e0bdd76fdf/CIN2022-8032673.004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/297e/8828335/c1154bf8a563/CIN2022-8032673.005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/297e/8828335/822d0c701e6b/CIN2022-8032673.006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/297e/8828335/5ac6b1307b06/CIN2022-8032673.007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/297e/8828335/8b7ace800361/CIN2022-8032673.008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/297e/8828335/4947138653b2/CIN2022-8032673.009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/297e/8828335/77736fab10d2/CIN2022-8032673.010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/297e/8828335/43c818faee9e/CIN2022-8032673.011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/297e/8828335/83e1fc9bffc3/CIN2022-8032673.012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/297e/8828335/12c7f0e55d9b/CIN2022-8032673.013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/297e/8828335/a5ffb25b0527/CIN2022-8032673.014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/297e/8828335/64d98a28c436/CIN2022-8032673.015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/297e/8828335/8e8001f086cd/CIN2022-8032673.016.jpg

Similar Articles

1
Deploying Machine Learning Techniques for Human Emotion Detection.
Comput Intell Neurosci. 2022 Feb 2;2022:8032673. doi: 10.1155/2022/8032673. eCollection 2022.
2
A Comparison of Machine Learning Algorithms and Feature Sets for Automatic Vocal Emotion Recognition in Speech.
Sensors (Basel). 2022 Oct 6;22(19):7561. doi: 10.3390/s22197561.
3
Comparison of Classification Success Rates of Different Machine Learning Algorithms in the Diagnosis of Breast Cancer.
Asian Pac J Cancer Prev. 2022 Oct 1;23(10):3287-3297. doi: 10.31557/APJCP.2022.23.10.3287.
4
Comparison of machine learning approaches for radioisotope identification using NaI(Tl) gamma-ray spectrum.
Appl Radiat Isot. 2022 Aug;186:110212. doi: 10.1016/j.apradiso.2022.110212. Epub 2022 Apr 14.
5
Computer-assisted lip diagnosis on Traditional Chinese Medicine using multi-class support vector machines.
BMC Complement Altern Med. 2012 Aug 16;12:127. doi: 10.1186/1472-6882-12-127.
6
An Aggregated Mutual Information Based Feature Selection with Machine Learning Methods for Enhancing IoT Botnet Attack Detection.
Sensors (Basel). 2021 Dec 28;22(1):185. doi: 10.3390/s22010185.
7
A novel speech emotion recognition method based on feature construction and ensemble learning.
PLoS One. 2022 Aug 15;17(8):e0267132. doi: 10.1371/journal.pone.0267132. eCollection 2022.
8
Recognition of Emotion Intensities Using Machine Learning Algorithms: A Comparative Study.
Sensors (Basel). 2019 Apr 21;19(8):1897. doi: 10.3390/s19081897.
9
Wrapper method for feature selection to classify cardiac arrhythmia.
Annu Int Conf IEEE Eng Med Biol Soc. 2017 Jul;2017:3656-3659. doi: 10.1109/EMBC.2017.8037650.
10
Efficient Model for Coronary Artery Disease Diagnosis: A Comparative Study of Several Machine Learning Algorithms.
J Healthc Eng. 2022 Oct 18;2022:5359540. doi: 10.1155/2022/5359540. eCollection 2022.

Cited By

1
Application of Multiple Deep Learning Architectures for Emotion Classification Based on Facial Expressions.
Sensors (Basel). 2025 Feb 27;25(5):1478. doi: 10.3390/s25051478.
2
Feasibility study of emotion mimicry analysis in human-machine interaction.
Sci Rep. 2025 Jan 31;15(1):3859. doi: 10.1038/s41598-025-87688-z.
3
Multimodal Technologies for Remote Assessment of Neurological and Mental Health.

References

1
Facial Expression Recognition of Instructor Using Deep Features and Extreme Learning Machine.
Comput Intell Neurosci. 2021 Apr 30;2021:5570870. doi: 10.1155/2021/5570870. eCollection 2021.
2
Ventilation Diagnosis of Angle Grinder Using Thermal Imaging.
Sensors (Basel). 2021 Apr 18;21(8):2853. doi: 10.3390/s21082853.
3
Efficient video-based breathing pattern and respiration rate monitoring for remote health monitoring.
J Speech Lang Hear Res. 2024 Nov 7;67(11):4233-4245. doi: 10.1044/2024_JSLHR-24-00142. Epub 2024 Jul 10.
4
An appraisal-based chain-of-emotion architecture for affective language model game agents.
PLoS One. 2024 May 10;19(5):e0301033. doi: 10.1371/journal.pone.0301033. eCollection 2024.
5
A Multimodal Feature Fusion Framework for Sleep-Deprived Fatigue Detection to Prevent Accidents.
Sensors (Basel). 2023 Apr 20;23(8):4129. doi: 10.3390/s23084129.
6
Automatic Detection of Horner Syndrome by Using Facial Images.
J Healthc Eng. 2022 Nov 21;2022:8670350. doi: 10.1155/2022/8670350. eCollection 2022.
7
Kids' Emotion Recognition Using Various Deep-Learning Models with Explainable AI.
Sensors (Basel). 2022 Oct 21;22(20):8066. doi: 10.3390/s22208066.
8
Emotion Recognizing by a Robotic Solution Initiative (EMOTIVE Project).
Sensors (Basel). 2022 Apr 8;22(8):2861. doi: 10.3390/s22082861.
J Opt Soc Am A Opt Image Sci Vis. 2020 Nov 1;37(11):C118-C124. doi: 10.1364/JOSAA.399284.
4
Reliable Crowdsourcing and Deep Locality-Preserving Learning for Unconstrained Facial Expression Recognition.
IEEE Trans Image Process. 2019 Jan;28(1):356-370. doi: 10.1109/TIP.2018.2868382. Epub 2018 Sep 3.
5
Human-Robot Interaction: Status and Challenges.
Hum Factors. 2016 Jun;58(4):525-32. doi: 10.1177/0018720816644364. Epub 2016 Apr 20.
6
Block-Row Sparse Multiview Multilabel Learning for Image Classification.
IEEE Trans Cybern. 2016 Feb;46(2):450-61. doi: 10.1109/TCYB.2015.2403356. Epub 2015 Feb 27.
7
Augmented reality-based self-facial modeling to promote the emotional expression and social skills of adolescents with autism spectrum disorders.
Res Dev Disabil. 2015 Jan;36C:396-403. doi: 10.1016/j.ridd.2014.10.015. Epub 2014 Nov 8.
8
Learning local appearances with sparse representation for robust and fast visual tracking.
IEEE Trans Cybern. 2015 Apr;45(4):663-75. doi: 10.1109/TCYB.2014.2332279. Epub 2014 Jul 10.
9
Discriminative BoW framework for mobile landmark recognition.
IEEE Trans Cybern. 2014 May;44(5):695-706. doi: 10.1109/TCYB.2013.2267015. Epub 2013 Jul 3.
10
Automated Facial Action Coding System for dynamic analysis of facial expressions in neuropsychiatric disorders.
J Neurosci Methods. 2011 Sep 15;200(2):237-56. doi: 10.1016/j.jneumeth.2011.06.023. Epub 2011 Jun 29.