Dynamic and static facial expressions decoded from motion-sensitive areas in the macaque monkey.
Author information
Laboratories of Neuropsychology and Brain and Cognition, NIMH/NIH, Bethesda, Maryland 20892, USA.
Publication information
J Neurosci. 2012 Nov 7;32(45):15952-62. doi: 10.1523/JNEUROSCI.1992-12.2012.
Humans adeptly use visual motion to recognize socially relevant facial information. The macaque provides a model visual system for studying neural coding of expression movements, as its superior temporal sulcus (STS) possesses brain areas selective for faces and areas sensitive to visual motion. We used functional magnetic resonance imaging and facial stimuli to localize motion-sensitive areas [motion in faces (Mf) areas], which responded more to dynamic faces compared with static faces, and face-selective areas, which responded selectively to faces compared with objects and places. Using multivariate analysis, we found that information about both dynamic and static facial expressions could be robustly decoded from Mf areas. By contrast, face-selective areas exhibited relatively less facial expression information. Classifiers trained with expressions from one motion type (dynamic or static) showed poor generalization to the other motion type, suggesting that Mf areas employ separate and nonconfusable neural codes for dynamic and static presentations of the same expressions. We also show that some of the motion sensitivity elicited by facial stimuli was not specific to faces but could also be elicited by moving dots, particularly in fundus of the superior temporal and middle superior temporal polysensory/lower superior temporal areas, confirming their already well established low-level motion sensitivity. A different pattern was found in anterior STS, which responded more to dynamic than to static faces but was not sensitive to dot motion. Overall, we show that emotional expressions are mostly represented outside of face-selective cortex, in areas sensitive to motion. These regions may play a fundamental role in enhancing recognition of facial expression despite the complex stimulus changes associated with motion.
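The cross-decoding logic behind the generalization result (train a classifier on expressions from one motion type, test it on the other) can be sketched with synthetic data. This is a minimal illustration, not the paper's pipeline: the use of scikit-learn, the trial and voxel counts, and the simulated condition-specific pattern codes are all assumptions made here for clarity.

```python
# Sketch of cross-condition MVPA decoding with synthetic "voxel" patterns.
# Assumption: each expression evokes one multivoxel code when dynamic and an
# unrelated code when static, mimicking separate, nonconfusable neural codes.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_voxels, n_expressions = 60, 100, 3

labels = rng.integers(0, n_expressions, n_trials)
dyn_code = rng.normal(0, 1, (n_expressions, n_voxels))  # dynamic-face codes
sta_code = rng.normal(0, 1, (n_expressions, n_voxels))  # static-face codes
X_dyn = dyn_code[labels] + rng.normal(0, 1, (n_trials, n_voxels))
X_sta = sta_code[labels] + rng.normal(0, 1, (n_trials, n_voxels))

clf = LogisticRegression(max_iter=1000)

# Within-condition decoding: cross-validated accuracy on dynamic trials.
within = cross_val_score(clf, X_dyn, labels, cv=5).mean()

# Cross-condition decoding: train on dynamic patterns, test on static ones.
clf.fit(X_dyn, labels)
cross = clf.score(X_sta, labels)

print(f"within-condition: {within:.2f}, cross-condition: {cross:.2f}")
# Because the two codes are unrelated here, within-condition accuracy is
# high while cross-condition accuracy has no reason to exceed chance (1/3).
```

In the paper's terms, high within-condition accuracy with poor cross-condition generalization is the signature of separate codes for dynamic and static presentations of the same expressions; shared codes would instead yield above-chance transfer in both directions.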