


A dynamic texture-based approach to recognition of facial actions and their temporal models.

Affiliations

Queen Mary University of London, UK.

Publication Info

IEEE Trans Pattern Anal Mach Intell. 2010 Nov;32(11):1940-54. doi: 10.1109/TPAMI.2010.50.

DOI: 10.1109/TPAMI.2010.50
PMID: 20847386
Abstract

In this work, we propose a dynamic texture-based approach to the recognition of facial Action Units (AUs, atomic facial gestures) and their temporal models (i.e., sequences of temporal segments: neutral, onset, apex, and offset) in near-frontal-view face videos. Two approaches to modeling the dynamics and the appearance in the face region of an input video are compared: an extended version of Motion History Images and a novel method based on Nonrigid Registration using Free-Form Deformations (FFDs). The extracted motion representation is used to derive motion orientation histogram descriptors in both the spatial and temporal domain. Per AU, a combination of discriminative, frame-based GentleBoost ensemble learners and dynamic, generative Hidden Markov Models detects the presence of the AU in question and its temporal segments in an input image sequence. When tested for recognition of all 27 lower and upper face AUs, occurring alone or in combination in 264 sequences from the MMI facial expression database, the proposed method achieved an average event recognition accuracy of 89.2 percent for the MHI method and 94.3 percent for the FFD method. The generalization performance of the FFD method has been tested using the Cohn-Kanade database. Finally, we also explored the performance on spontaneous expressions in the Sensitive Artificial Listener data set.
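The first of the two motion representations the abstract compares extends Motion History Images. As a rough illustration only (the classic Davis–Bobick MHI update rule, not the authors' extended version), the idea can be sketched in Python with NumPy; the frame-differencing threshold, the decay constant `tau`, and the toy moving-spot sequence are all arbitrary choices for this example:

```python
import numpy as np

def update_mhi(mhi, motion_mask, tau=15):
    """Classic MHI update: pixels where motion occurred are stamped
    with tau; all other pixels decay by 1 toward zero."""
    return np.where(motion_mask, float(tau), np.maximum(mhi - 1.0, 0.0))

# Toy sequence: a bright spot moving right across a 5x5 frame.
frames = [np.zeros((5, 5)) for _ in range(3)]
for t in range(3):
    frames[t][2, t + 1] = 1.0

mhi = np.zeros((5, 5))
prev = frames[0]
for frame in frames[1:]:
    motion = np.abs(frame - prev) > 0.5  # naive frame differencing
    mhi = update_mhi(mhi, motion, tau=15)
    prev = frame

# The most recent motion holds the value tau; older motion has
# decayed, so the MHI encodes where and how recently motion occurred.
```

The resulting single image summarizes the recency of motion per pixel, which is what makes gradient-orientation histograms over it a useful spatio-temporal descriptor.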

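Per AU, the pipeline combines frame-level GentleBoost scores with a generative HMM over the four temporal segments (neutral, onset, apex, offset). A minimal sketch of the temporal-model side, assuming a hypothetical left-to-right transition matrix and fabricated per-frame log-likelihoods standing in for GentleBoost outputs (none of these numbers come from the paper):

```python
import numpy as np

# Temporal segments of an AU, as in the paper.
STATES = ["neutral", "onset", "apex", "offset"]

# Hypothetical transition matrix (rows sum to 1): an AU event
# progresses neutral -> onset -> apex -> offset -> neutral.
A = np.array([
    [0.9, 0.1, 0.0, 0.0],
    [0.0, 0.8, 0.2, 0.0],
    [0.0, 0.0, 0.8, 0.2],
    [0.2, 0.0, 0.0, 0.8],
])
pi = np.array([1.0, 0.0, 0.0, 0.0])  # assume sequences start neutral

def viterbi(log_lik, A, pi):
    """Most likely state path given per-frame log-likelihoods (T x N)."""
    T, N = log_lik.shape
    delta = np.log(pi + 1e-12) + log_lik[0]
    psi = np.zeros((T, N), dtype=int)
    for t in range(1, T):
        scores = delta[:, None] + np.log(A + 1e-12)  # scores[i, j]: i -> j
        psi[t] = scores.argmax(axis=0)
        delta = scores.max(axis=0) + log_lik[t]
    path = [int(delta.argmax())]
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

# Fabricated evidence: two frames each of neutral, onset, apex, offset.
obs = [0, 0, 1, 1, 2, 2, 3, 3]
log_lik = np.full((8, 4), np.log(0.05))
log_lik[np.arange(8), obs] = np.log(0.85)

path = viterbi(log_lik, A, pi)
segments = [STATES[s] for s in path]
```

The left-to-right structure is what lets the HMM clean up frame-level classifier noise: a spurious "apex" frame in the middle of a neutral run costs two improbable transitions, so Viterbi decoding smooths it away.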

Similar Articles

1. A dynamic texture-based approach to recognition of facial actions and their temporal models.
IEEE Trans Pattern Anal Mach Intell. 2010 Nov;32(11):1940-54. doi: 10.1109/TPAMI.2010.50.
2. Fully automatic recognition of the temporal phases of facial actions.
IEEE Trans Syst Man Cybern B Cybern. 2012 Feb;42(1):28-43. doi: 10.1109/TSMCB.2011.2163710. Epub 2011 Sep 15.
3. Active and dynamic information fusion for facial expression understanding from image sequences.
IEEE Trans Pattern Anal Mach Intell. 2005 May;27(5):699-714. doi: 10.1109/TPAMI.2005.93.
4. Dynamic texture recognition using local binary patterns with an application to facial expressions.
IEEE Trans Pattern Anal Mach Intell. 2007 Jun;29(6):915-28. doi: 10.1109/TPAMI.2007.1110.
5. Automatic temporal segment detection and affect recognition from face and body display.
IEEE Trans Syst Man Cybern B Cybern. 2009 Feb;39(1):64-84. doi: 10.1109/TSMCB.2008.927269. Epub 2008 Aug 12.
6. A dynamic appearance descriptor approach to facial actions temporal modeling.
IEEE Trans Cybern. 2014 Feb;44(2):161-74. doi: 10.1109/TCYB.2013.2249063.
7. Dynamics of facial expression: recognition of facial actions and their temporal segments from face profile image sequences.
IEEE Trans Syst Man Cybern B Cybern. 2006 Apr;36(2):433-49. doi: 10.1109/tsmcb.2005.859075.
8. Gabor-based kernel PCA with fractional power polynomial models for face recognition.
IEEE Trans Pattern Anal Mach Intell. 2004 May;26(5):572-81. doi: 10.1109/TPAMI.2004.1273927.
9. Facial action unit recognition by exploiting their dynamic and semantic relationships.
IEEE Trans Pattern Anal Mach Intell. 2007 Oct;29(10):1683-99. doi: 10.1109/TPAMI.2007.1094.
10. Modeling, clustering, and segmenting video with mixtures of dynamic textures.
IEEE Trans Pattern Anal Mach Intell. 2008 May;30(5):909-26. doi: 10.1109/TPAMI.2007.70738.

Cited By

1. A Review of 25 Spontaneous and Dynamic Facial Expression Databases of Basic Emotions.
Affect Sci. 2025 Jan 15;6(2):380-394. doi: 10.1007/s42761-024-00289-3. eCollection 2025 Jun.
2. Quantifying dynamic facial expressions under naturalistic conditions.
Elife. 2022 Aug 31;11:e79581. doi: 10.7554/eLife.79581.
3. Learning Pain from Action Unit Combinations: A Weakly Supervised Approach via Multiple Instance Learning.
IEEE Trans Affect Comput. 2022 Jan-Mar;13(1):135-146. doi: 10.1109/taffc.2019.2949314. Epub 2019 Oct 30.
4. Context-Aware Emotion Recognition in the Wild Using Spatio-Temporal and Temporal-Pyramid Models.
Sensors (Basel). 2021 Mar 27;21(7):2344. doi: 10.3390/s21072344.
5. FusionSense: Emotion Classification Using Feature Fusion of Multimodal Data and Deep Learning in a Brain-Inspired Spiking Neural Network.
Sensors (Basel). 2020 Sep 17;20(18):5328. doi: 10.3390/s20185328.
6. Crossing Domains for AU Coding: Perspectives, Approaches, and Measures.
IEEE Trans Biom Behav Identity Sci. 2020 Apr;2(2):158-171. doi: 10.1109/tbiom.2020.2977225. Epub 2020 Mar 3.
7. A Comprehensive Performance Evaluation of Deformable Face Tracking "In-the-Wild".
Int J Comput Vis. 2018;126(2):198-232. doi: 10.1007/s11263-017-0999-5. Epub 2017 Feb 25.
8. D-PAttNet: Dynamic Patch-Attentive Deep Network for Action Unit Detection.
Front Comput Sci. 2019 Nov;1. doi: 10.3389/fcomp.2019.00011. Epub 2019 Nov 29.
9. Cross-domain AU Detection: Domains, Learning Approaches, and Measures.
Proc Int Conf Autom Face Gesture Recognit. 2019 May;2019. doi: 10.1109/FG.2019.8756543. Epub 2019 Jul 11.
10. Learning Facial Action Units from Web Images with Scalable Weakly Supervised Clustering.
Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2018 Jun;2018:2090-2099. doi: 10.1109/CVPR.2018.00223. Epub 2018 Dec 17.