

Learning Bases of Activity for Facial Expression Recognition

Authors

Sariyanidi Evangelos, Gunes Hatice, Cavallaro Andrea

Publication

IEEE Trans Image Process. 2017 Apr;26(4):1965-1978. doi: 10.1109/TIP.2017.2662237. Epub 2017 Feb 1.

DOI: 10.1109/TIP.2017.2662237
PMID: 28166497
Abstract

The extraction of descriptive features from sequences of faces is a fundamental problem in facial expression analysis. Facial expressions are represented by psychologists as a combination of elementary movements known as action units: each movement is localised and its intensity is specified with a score that is small when the movement is subtle and large when the movement is pronounced. Inspired by this approach, we propose a novel data-driven feature extraction framework that represents facial expression variations as a linear combination of localised basis functions, whose coefficients are proportional to movement intensity. We show that the linear basis functions required by this framework can be obtained by training a sparse linear model with Gabor phase shifts computed from facial videos. The proposed framework addresses generalisation issues that are not addressed by existing learnt representations, and achieves, with the same learning parameters, state-of-the-art results in recognising both posed expressions and spontaneous micro-expressions. This performance is confirmed even when the data used to train the model differ from test data in terms of the intensity of facial movements and frame rate.
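The core idea of the abstract — expression variations expressed as a sparse linear combination of learned, localised basis functions, with coefficients proportional to movement intensity — can be illustrated with generic dictionary learning. This is only a minimal sketch, not the authors' implementation: the input here is random data standing in for vectorised Gabor phase-shift maps, and all sizes and parameter choices are assumptions.

```python
# Illustrative sketch of sparse basis learning in the spirit of the paper.
# NOT the authors' method: real inputs would be Gabor phase shifts computed
# from facial video frame pairs; here we use synthetic data as a stand-in.
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(0)

# Pretend each row is a vectorised Gabor phase-shift map for one frame pair.
n_samples, n_features = 200, 64
X = rng.standard_normal((n_samples, n_features))

# Learn a small dictionary of basis functions with sparse coefficients.
dico = DictionaryLearning(n_components=16, alpha=1.0,
                          transform_algorithm="lasso_lars", random_state=0)
codes = dico.fit_transform(X)   # sparse coefficients, one row per sample
bases = dico.components_        # learned basis functions (the "dictionary")

# The coefficients serve as the expression descriptor: a larger magnitude
# corresponds to a more pronounced movement along that basis direction.
print(codes.shape, bases.shape)  # (200, 16) (16, 64)
```

In the paper's framework the learned bases are additionally localised (like action units); plain dictionary learning as above does not enforce that locality, which is part of what the sparse-linear-model training on phase shifts provides.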


Similar Articles

1. Learning Bases of Activity for Facial Expression Recognition.
   IEEE Trans Image Process. 2017 Apr;26(4):1965-1978. doi: 10.1109/TIP.2017.2662237. Epub 2017 Feb 1.
2. Learning sparse representations for human action recognition.
   IEEE Trans Pattern Anal Mach Intell. 2012 Aug;34(8):1576-88. doi: 10.1109/TPAMI.2011.253.
3. Context-Sensitive Dynamic Ordinal Regression for Intensity Estimation of Facial Action Units.
   IEEE Trans Pattern Anal Mach Intell. 2015 May;37(5):944-58. doi: 10.1109/TPAMI.2014.2356192.
4. Multi-layer sparse representation for weighted LBP-patches based facial expression recognition.
   Sensors (Basel). 2015 Mar 19;15(3):6719-39. doi: 10.3390/s150306719.
5. Joint Facial Action Unit Detection and Feature Fusion: A Multi-conditional Learning Approach.
   IEEE Trans Image Process. 2016 Dec;25(12):5727-5742. doi: 10.1109/TIP.2016.2615288. Epub 2016 Oct 5.
6. Dynamic Emotional Faces Generalise Better to a New Expression but not to a New View.
   Sci Rep. 2016 Aug 8;6:31001. doi: 10.1038/srep31001.
7. Sparse Simultaneous Recurrent Deep Learning for Robust Facial Expression Recognition.
   IEEE Trans Neural Netw Learn Syst. 2018 Oct;29(10):4905-4916. doi: 10.1109/TNNLS.2017.2776248. Epub 2018 Jan 5.
8. Dynamic Facial Expression Recognition With Atlas Construction and Sparse Representation.
   IEEE Trans Image Process. 2016 May;25(5):1977-92. doi: 10.1109/TIP.2016.2537215. Epub 2016 Mar 2.
9. A unified probabilistic framework for spontaneous facial action modeling and understanding.
   IEEE Trans Pattern Anal Mach Intell. 2010 Feb;32(2):258-73. doi: 10.1109/TPAMI.2008.293.
10. Performance of a Computational Model of the Mammalian Olfactory System.

Cited By

1. Beyond FACS: Data-driven Facial Expression Dictionaries, with Application to Predicting Autism.
   Proc Int Conf Autom Face Gesture Recognit. 2025 May;2025. doi: 10.1109/fg61629.2025.11099288. Epub 2025 Aug 6.
2. Comparison of Human Experts and AI in Predicting Autism from Facial Behavior.
   CEUR Workshop Proc. 2023 Mar;3359(ITAH):48-57. Epub 2023 Mar 16.
3. Expression-Guided Deep Joint Learning for Facial Expression Recognition.
   Sensors (Basel). 2023 Aug 13;23(16):7148. doi: 10.3390/s23167148.
4. Multiparameter Space Decision Voting and Fusion Features for Facial Expression Recognition.
   Comput Intell Neurosci. 2020 Dec 29;2020:8886872. doi: 10.1155/2020/8886872. eCollection 2020.
5. Oral-Motor and Lexical Diversity During Naturalistic Conversations in Adults with Autism Spectrum Disorder.
   Proc Conf. 2018 Jun;2018:147-157. doi: 10.18653/v1/w18-0616.
6. Computational Assessment of Facial Expression Production in ASD Children.
   Sensors (Basel). 2018 Nov 16;18(11):3993. doi: 10.3390/s18113993.