Ying-Li Tian, Takeo Kanade, Jeffrey F. Cohn
Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213.
Robotics Institute, Carnegie Mellon University, Pittsburgh, and the Department of Psychology, University of Pittsburgh, Pittsburgh, PA 15260.
IEEE Trans Pattern Anal Mach Intell. 2001 Feb;23(2):97-115. doi: 10.1109/34.908962.
Most automatic expression analysis systems attempt to recognize a small set of prototypic expressions, such as happiness, anger, surprise, and fear. Such prototypic expressions, however, occur rather infrequently. Human emotions and intentions are more often communicated by changes in one or a few discrete facial features. In this paper, we develop an Automatic Face Analysis (AFA) system to analyze facial expressions based on both permanent facial features (brows, eyes, mouth) and transient facial features (deepening of facial furrows) in a nearly frontal-view face image sequence. The AFA system recognizes fine-grained changes in facial expression as action units (AUs) of the Facial Action Coding System (FACS), instead of a few prototypic expressions. Multistate face and facial component models are proposed for tracking and modeling the various facial features, including lips, eyes, brows, cheeks, and furrows. During tracking, detailed parametric descriptions of the facial features are extracted. With these parameters as inputs, a group of action units (neutral expression, six upper face AUs, and ten lower face AUs) are recognized, whether they occur alone or in combination. The system has achieved average recognition rates of 96.4 percent (95.4 percent if neutral expressions are excluded) for upper face AUs and 96.7 percent (95.6 percent with neutral expressions excluded) for lower face AUs. The generalizability of the system has been tested using independent image databases collected and FACS-coded for ground truth by different research teams.
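The abstract describes a pipeline in which tracked feature parameters are fed to a classifier that outputs FACS action units. As a minimal sketch of that parameter-to-AU mapping, the toy code below uses a small dataclass of upper-face parameters and simple thresholds; the field names, thresholds, and rules are hypothetical stand-ins (the paper itself uses a trained classifier, not hand-written rules), but the AU codes follow standard FACS definitions.

```python
from dataclasses import dataclass

@dataclass
class UpperFaceParams:
    # Parametric descriptions extracted during tracking (hypothetical fields,
    # loosely modeled on the brow/eye/furrow features named in the abstract).
    brow_height: float    # vertical brow displacement relative to the neutral frame
    eye_opening: float    # eyelid aperture relative to the neutral frame
    furrow_present: bool  # transient feature: deepened forehead furrows

def recognize_upper_face_aus(p: UpperFaceParams) -> list[str]:
    """Toy rule-based stand-in for the AU classifier: maps tracked feature
    parameters to FACS action units. Thresholds are purely illustrative."""
    aus = []
    if p.brow_height > 0.2 and p.furrow_present:
        aus.append("AU1+AU2")   # inner + outer brow raiser (often co-occur)
    elif p.brow_height < -0.2:
        aus.append("AU4")       # brow lowerer
    if p.eye_opening > 0.15:
        aus.append("AU5")       # upper lid raiser
    elif p.eye_opening < -0.15:
        aus.append("AU7")       # lid tightener
    # AUs are reported alone or in combination; no match means neutral.
    return aus or ["neutral"]

print(recognize_upper_face_aus(UpperFaceParams(0.3, 0.2, True)))
# → ['AU1+AU2', 'AU5']
print(recognize_upper_face_aus(UpperFaceParams(0.0, 0.0, False)))
# → ['neutral']
```

The key design point the abstract emphasizes is that recognition operates on these intermediate parametric descriptions rather than on raw pixels, which is what lets single AUs and AU combinations share one recognizer.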