
A real-time automated system for the recognition of human facial expressions.

Authors

Anderson Keith, McOwan Peter W

Affiliation

Department of Computer Science, Queen Mary, University of London, London E1 4NS, UK.

Publication

IEEE Trans Syst Man Cybern B Cybern. 2006 Feb;36(1):96-105. doi: 10.1109/tsmcb.2005.854502.

DOI: 10.1109/tsmcb.2005.854502
PMID: 16468569
Abstract

A fully automated, multistage system for real-time recognition of facial expression is presented. The system uses facial motion to characterize monochrome frontal views of facial expressions and is able to operate effectively in cluttered and dynamic scenes, recognizing the six emotions universally associated with unique facial expressions, namely happiness, sadness, disgust, surprise, fear, and anger. Faces are located using a spatial ratio template tracker algorithm. Optical flow of the face is subsequently determined using a real-time implementation of a robust gradient model. The expression recognition system then averages facial velocity information over identified regions of the face and cancels out rigid head motion by taking ratios of this averaged motion. The motion signatures produced are then classified using Support Vector Machines as either nonexpressive or as one of the six basic emotions. The completed system is demonstrated in two simple affective computing applications that respond in real-time to the facial expressions of the user, thereby providing the potential for improvements in the interaction between a computer user and technology.
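The pipeline the abstract describes — average the optical-flow velocity over identified face regions, cancel rigid head motion by taking ratios of the averaged motion, then classify the resulting motion signature with a Support Vector Machine — can be sketched roughly as follows. This is not the authors' implementation: the region layout, the choice of reference region, and the training data below are all hypothetical, and scikit-learn's `SVC` stands in for the paper's SVM stage. The paper computes the per-region flow with a real-time robust gradient model; here the per-region mean velocities are simply given as input.

```python
import numpy as np
from sklearn.svm import SVC

# Neutral plus the six basic emotions named in the abstract.
CLASSES = ["nonexpressive", "happiness", "sadness", "disgust",
           "surprise", "fear", "anger"]

def motion_signature(region_means):
    """Build a motion signature from per-region averaged optical flow.

    region_means: (n_regions, 2) array, the mean flow vector (vx, vy)
    over each identified face region. Following the abstract, rigid
    head motion is cancelled by taking ratios of the averaged motion:
    each region's mean velocity is divided by a reference region's
    (region 0 here, a hypothetical choice). The small epsilon guards
    against division by a near-zero reference velocity.
    """
    ref = region_means[0]
    return (region_means[1:] / (ref + 1e-8)).ravel()

# Synthetic stand-in data: one (n_regions, 2) mean-flow array per frame.
rng = np.random.default_rng(0)
n_samples, n_regions = 70, 5
X = np.array([motion_signature(rng.normal(size=(n_regions, 2)))
              for _ in range(n_samples)])
y = rng.integers(0, len(CLASSES), size=n_samples)

# Classify signatures as nonexpressive or one of the six emotions.
clf = SVC(kernel="rbf").fit(X, y)
pred = clf.predict(X[:3])
```

With five regions and one of them used as the reference, each signature has (5 − 1) × 2 = 8 components; real training data would come from labelled expression sequences rather than random draws.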

Similar Articles

1. A real-time automated system for the recognition of human facial expressions.
   IEEE Trans Syst Man Cybern B Cybern. 2006 Feb;36(1):96-105. doi: 10.1109/tsmcb.2005.854502.
2. Face description with local binary patterns: application to face recognition.
   IEEE Trans Pattern Anal Mach Intell. 2006 Dec;28(12):2037-41. doi: 10.1109/TPAMI.2006.244.
3. Dynamics of facial expression: recognition of facial actions and their temporal segments from face profile image sequences.
   IEEE Trans Syst Man Cybern B Cybern. 2006 Apr;36(2):433-49. doi: 10.1109/tsmcb.2005.859075.
4. Multiple nose region matching for 3D face recognition under varying facial expression.
   IEEE Trans Pattern Anal Mach Intell. 2006 Oct;28(10):1695-700. doi: 10.1109/TPAMI.2006.210.
5. A mosaicing scheme for pose-invariant face recognition.
   IEEE Trans Syst Man Cybern B Cybern. 2007 Oct;37(5):1212-25. doi: 10.1109/tsmcb.2007.903537.
6. Facial expression recognition in image sequences using geometric deformation features and Support Vector Machines.
   IEEE Trans Image Process. 2007 Jan;16(1):172-87. doi: 10.1109/tip.2006.884954.
7. Active and dynamic information fusion for facial expression understanding from image sequences.
   IEEE Trans Pattern Anal Mach Intell. 2005 May;27(5):699-714. doi: 10.1109/TPAMI.2005.93.
8. Optimal linear combination of facial regions for improving identification performance.
   IEEE Trans Syst Man Cybern B Cybern. 2007 Oct;37(5):1138-48. doi: 10.1109/tsmcb.2007.895325.
9. Fusing face-verification algorithms and humans.
   IEEE Trans Syst Man Cybern B Cybern. 2007 Oct;37(5):1149-55. doi: 10.1109/tsmcb.2007.907034.
10. Effective feature extraction in high-dimensional space.
    IEEE Trans Syst Man Cybern B Cybern. 2008 Dec;38(6):1652-6. doi: 10.1109/TSMCB.2008.927276.

Cited By

1. Analysis of frequency domain features for the classification of evoked emotions using EEG signals.
   Exp Brain Res. 2025 Feb 14;243(3):65. doi: 10.1007/s00221-025-07002-1.
2. Characteristics of vocal cues, facial action units, and emotions that distinguish high from low self-protection participants engaged in self-protective response to self-criticizing.
   Front Psychol. 2025 Jan 15;15:1363993. doi: 10.3389/fpsyg.2024.1363993. eCollection 2024.
3. Cross subject emotion identification from multichannel EEG sub-bands using Tsallis entropy feature and KNN classifier.
   Brain Inform. 2024 Mar 5;11(1):7. doi: 10.1186/s40708-024-00220-3.
4. Robust Human Face Emotion Classification Using Triplet-Loss-Based Deep CNN Features and SVM.
   Sensors (Basel). 2023 May 15;23(10):4770. doi: 10.3390/s23104770.
5. STGATE: Spatial-temporal graph attention network with a transformer encoder for EEG-based emotion recognition.
   Front Hum Neurosci. 2023 Apr 13;17:1169949. doi: 10.3389/fnhum.2023.1169949. eCollection 2023.
6. EEG-Based Emotion Classification Using Stacking Ensemble Approach.
   Sensors (Basel). 2022 Nov 6;22(21):8550. doi: 10.3390/s22218550.
7. Automated detection of smiles as discrete episodes.
   J Oral Rehabil. 2022 Dec;49(12):1173-1180. doi: 10.1111/joor.13378. Epub 2022 Oct 20.
8. The Application of Electroencephalogram in Driving Safety: Current Status and Future Prospects.
   Front Psychol. 2022 Jul 22;13:919695. doi: 10.3389/fpsyg.2022.919695. eCollection 2022.
9. EEG Emotion Classification Network Based on Attention Fusion of Multi-Channel Band Features.
   Sensors (Basel). 2022 Jul 13;22(14):5252. doi: 10.3390/s22145252.
10. AttendAffectNet-Emotion Prediction of Movie Viewers Using Multimodal Fusion with Self-Attention.
    Sensors (Basel). 2021 Dec 14;21(24):8356. doi: 10.3390/s21248356.