Department of Radiology, Section of Biomedical Image Analysis, University of Pennsylvania, Philadelphia, PA 19104, USA.
J Neurosci Methods. 2011 Sep 15;200(2):237-56. doi: 10.1016/j.jneumeth.2011.06.023. Epub 2011 Jun 29.
Facial expression is widely used to evaluate emotional impairment in neuropsychiatric disorders. Ekman and Friesen's Facial Action Coding System (FACS) encodes movements of individual facial muscles from distinct momentary changes in facial appearance. Unlike facial expression ratings based on categorization of expressions into prototypical emotions (happiness, sadness, anger, fear, disgust, etc.), FACS can encode ambiguous and subtle expressions, and is therefore potentially more suitable for analyzing small differences in facial affect. However, FACS rating requires extensive training, and is time-consuming and subjective, and thus prone to bias. To overcome these limitations, we developed an automated FACS based on advanced computer science technology. The system automatically tracks faces in a video, extracts geometric and texture features, and produces temporal profiles of each facial muscle movement. These profiles are quantified to compute frequencies of single and combined Action Units (AUs) in videos, and they can facilitate statistical studies of large populations in disorders known to impact facial expression. We derived quantitative measures of flat and inappropriate facial affect automatically from temporal AU profiles. The applicability of the automated FACS was illustrated in a pilot study, by applying it to videos from eight schizophrenia patients and controls. We created temporal AU profiles that provided rich information on the dynamics of facial muscle movements for each subject. The quantitative measures of flatness and inappropriateness showed clear differences between patients and controls, highlighting their potential for automatic and objective quantification of symptom severity.
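The abstract describes quantifying temporal AU profiles into frequencies of single and combined AUs, and deriving a flatness measure from them. The paper's actual feature extraction and measures are not given here, so the following is only a minimal illustrative sketch: it assumes the temporal profile of each AU has been thresholded into a binary per-frame activation matrix, and uses a simple "one minus mean activity" score as a stand-in flatness measure (a hypothetical simplification, not the authors' definition).

```python
import numpy as np

def au_frequencies(activations):
    """Fraction of frames in which each single AU is active.
    activations: (n_frames, n_aus) binary array, one column per AU."""
    return activations.mean(axis=0)

def combo_frequency(activations, combo):
    """Fraction of frames in which all AUs in `combo` (a list of
    column indices) are simultaneously active."""
    return activations[:, combo].all(axis=1).mean()

def flatness(activations):
    """Illustrative flatness score (not the paper's measure):
    1 minus overall mean AU activity, so a video with little
    facial muscle movement scores near 1."""
    return 1.0 - activations.mean()

# Toy example: 4 frames, 2 AUs (rows = frames, columns = AUs)
A = np.array([[1, 0],
              [1, 1],
              [0, 0],
              [1, 1]])
print(au_frequencies(A))        # per-AU activation rates
print(combo_frequency(A, [0, 1]))  # joint activation rate of AU pair
print(flatness(A))              # crude overall flatness score
```

In practice the per-frame AU activations would come from the classifier outputs over the tracked geometric and texture features; the binary thresholding and the flatness definition above are assumptions made for illustration.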