Wang Peng, Barrett Frederick, Martin Elizabeth, Milonova Marina, Gur Raquel E, Gur Ruben C, Kohler Christian, Verma Ragini
Section of Biomedical Image Analysis, Department of Radiology, University of Pennsylvania, Philadelphia, PA 19104, USA.
J Neurosci Methods. 2008 Feb 15;168(1):224-38. doi: 10.1016/j.jneumeth.2007.09.030. Epub 2007 Oct 5.
Deficits in emotional expression are prominent in several neuropsychiatric disorders, including schizophrenia. Available clinical facial expression evaluations provide subjective and qualitative measurements, which are based on static 2D images that do not capture the temporal dynamics and subtleties of expression changes. Therefore, there is a need for automated, objective and quantitative measurements of facial expressions captured using videos. This paper presents a computational framework that creates probabilistic expression profiles for video data and can potentially help to automatically quantify emotional expression differences between patients with neuropsychiatric disorders and healthy controls. Our method automatically detects and tracks facial landmarks in videos, and then extracts geometric features to characterize facial expression changes. To analyze temporal facial expression changes, we employ probabilistic classifiers that analyze facial expressions in individual frames, and then propagate the probabilities throughout the video to capture the temporal characteristics of facial expressions. The applications of our method to healthy controls and case studies of patients with schizophrenia and Asperger's syndrome demonstrate the capability of the video-based expression analysis method in capturing subtleties of facial expression. Such results can pave the way for a video-based method for quantitative analysis of facial expressions in clinical research of disorders that cause affective deficits.
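The abstract's pipeline (per-frame probabilistic expression classification, then probability propagation across frames to form a temporal expression profile) can be sketched as follows. This is a minimal illustration, not the paper's actual method: the prototype-distance softmax classifier and the exponential-smoothing propagation below are simplified stand-ins for the trained probabilistic classifiers and propagation scheme described in the paper, and all function names are hypothetical.

```python
import numpy as np

def frame_probabilities(features, prototypes):
    """Per-frame probabilistic classification: softmax over negative
    Euclidean distances from the frame's geometric feature vector to
    one prototype feature vector per expression class.
    (Illustrative stand-in for the paper's trained classifier.)"""
    d = np.linalg.norm(features - prototypes, axis=1)  # distance to each class
    e = np.exp(-d)
    return e / e.sum()

def expression_profile(feature_seq, prototypes, alpha=0.7):
    """Build a probabilistic expression profile for a video by
    propagating per-frame class probabilities through time with
    exponential smoothing (a simple stand-in for the paper's
    probability propagation), capturing temporal dynamics rather
    than judging each frame in isolation."""
    profile, prev = [], None
    for f in feature_seq:
        p = frame_probabilities(f, prototypes)
        if prev is not None:
            p = alpha * prev + (1 - alpha) * p  # carry temporal context forward
            p = p / p.sum()
        profile.append(p)
        prev = p
    return np.array(profile)  # shape: (n_frames, n_expression_classes)
```

Each row of the returned profile is a probability distribution over expression classes for one frame; differences between such profiles are what would let a clinical study quantify blunted or atypical affect over time.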