Marian Stewart Bartlett, Gwen C. Littlewort, Mark G. Frank, Kang Lee
Institute for Neural Computation, University of California, San Diego, 9500 Gilman Drive, MC 0440, La Jolla, CA 92093-0440, USA.
Curr Biol. 2014 Mar 31;24(7):738-43. doi: 10.1016/j.cub.2014.02.009. Epub 2014 Mar 20.
In highly social species such as humans, faces have evolved to convey rich information for social interaction, including expressions of emotions and pain [1-3]. Two motor pathways control facial movement [4-7]: a subcortical extrapyramidal motor system drives spontaneous facial expressions of felt emotions, and a cortical pyramidal motor system controls voluntary facial expressions. The pyramidal system enables humans to simulate facial expressions of emotions they do not actually experience. These simulations are so convincing that they deceive most observers [8-11]. However, machine vision may be able to distinguish deceptive facial signals from genuine ones by identifying subtle differences between pyramidally and extrapyramidally driven movements. Here, we show that human observers could not discriminate real expressions of pain from faked expressions of pain better than chance; even after training, human observers reached only a modest 55% accuracy. In contrast, a computer vision system that automatically measures facial movements and performs pattern recognition on those movements attained 85% accuracy. The machine system's superiority is attributable to its ability to differentiate the dynamics of genuine expressions from those of faked expressions. Thus, by revealing the dynamics of facial action through machine vision systems, our approach has the potential to elucidate behavioral fingerprints of the neural control systems involved in emotional signaling.
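The pipeline the abstract describes (measure facial movements over time, then classify genuine vs. faked expressions from their dynamics) can be illustrated with a minimal sketch. This is not the authors' implementation: the synthetic action-unit (AU) traces, the temporal features, and the SVM classifier below are all illustrative assumptions, standing in for whatever facial measurements and pattern-recognition stage the actual system used.

```python
# Hypothetical sketch: classify genuine vs. faked expressions from facial
# action unit (AU) intensity time series. Synthetic data model a reported
# dynamic difference: genuine movements as brief, variable bursts; faked
# ones as longer, smoother, more stereotyped ramps.
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def simulate_au_series(genuine, n_frames=120):
    """Toy AU intensity trace over n_frames video frames."""
    t = np.arange(n_frames)
    onset = rng.integers(10, 60)
    # Genuine: short, irregular bursts; faked: prolonged, smooth ramps.
    width = rng.integers(5, 15) if genuine else rng.integers(30, 50)
    signal = np.exp(-0.5 * ((t - onset) / width) ** 2)
    return signal + 0.05 * rng.standard_normal(n_frames)

def dynamic_features(x):
    """Temporal statistics intended to capture expression dynamics."""
    dx = np.diff(x)
    return [x.max(), x.mean(), dx.max(), np.abs(dx).mean(),
            int((x > 0.5 * x.max()).sum())]  # frames above half-peak

# Build a balanced synthetic dataset: 50 genuine, 50 faked clips.
X = np.array([dynamic_features(simulate_au_series(genuine=g))
              for g in [True, False] * 50])
y = np.array([1, 0] * 50)

# Standardize features, then classify with an RBF-kernel SVM.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
scores = cross_val_score(clf, X, y, cv=5)
print(round(float(scores.mean()), 2))
```

Because the two synthetic classes differ mainly in temporal shape rather than peak intensity, the derivative- and duration-based features carry most of the discriminative signal, mirroring the abstract's point that dynamics, not static appearance, separate genuine from deceptive expressions.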