Schyns Philippe G, Petro Lucy S, Smith Marie L
Centre for Cognitive Neuroimaging (CCNi), University of Glasgow, Glasgow, United Kingdom.
PLoS One. 2009 May 20;4(5):e5625. doi: 10.1371/journal.pone.0005625.
Competent social organisms will read the social signals of their peers. In primates, the face has evolved to transmit the organism's internal emotional state. Adaptive action suggests that the brain of the receiver has co-evolved to efficiently decode expression signals. Here, we review and integrate the evidence for this hypothesis. With a computational approach, we co-examined facial expressions as signals for data transmission and the brain as the receiver and decoder of these signals. First, we show in a model observer that facial expressions form a lowly correlated signal set. Second, using time-resolved EEG data, we show how the brain uses spatial frequency information impinging on the retina to decorrelate expression categories. Between 140 and 200 ms following stimulus onset, independently in the left and right hemispheres, an information-processing mechanism starts locally by encoding the eye, irrespective of expression, then zooms out to process the entire face, and finally zooms back in to the diagnostic features (e.g., the opened eyes in "fear", the mouth in "happy"). A model categorizer demonstrates that by 200 ms, the left and right hemispheres have represented enough information to predict behavioral categorization performance.
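As a rough illustration of the model-observer step (a minimal sketch, not the authors' code or data), the snippet below computes pairwise Pearson correlations between expression-category templates. All names here are hypothetical: `EXPRESSIONS`, `templates`, and `pairwise_correlations` are illustrative, and the templates are random placeholder arrays standing in for the calibrated face images used in the study. Low off-diagonal correlations are what the abstract's "lowly correlated signal set" claim refers to.

```python
import numpy as np

# Hypothetical expression categories; the study used standardized face images.
EXPRESSIONS = ["happy", "fear", "anger", "disgust", "sadness", "surprise", "neutral"]

rng = np.random.default_rng(0)
# Placeholder data: each template is a random 64x64 grayscale array standing in
# for the mean image of one expression category.
templates = {e: rng.standard_normal((64, 64)) for e in EXPRESSIONS}

def pairwise_correlations(templates):
    """Pearson correlation between every pair of flattened expression templates."""
    names = list(templates)
    n = len(names)
    corr = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            a = templates[names[i]].ravel()
            b = templates[names[j]].ravel()
            corr[i, j] = np.corrcoef(a, b)[0, 1]
    return names, corr

names, corr = pairwise_correlations(templates)
# Off-diagonal entries near zero would support the "lowly correlated
# signal set" interpretation: distinct expressions share little signal.
off_diag = corr[~np.eye(len(names), dtype=bool)]
print(f"mean |r| between distinct expressions: {np.abs(off_diag).mean():.3f}")
```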