Ramprakash Srinivasan, Julie D. Golomb, Aleix M. Martinez
Ohio State University, Columbus, Ohio 43210.
J Neurosci. 2016 Apr 20;36(16):4434-42. doi: 10.1523/JNEUROSCI.1704-15.2016.
By combining different facial muscle actions, called action units, humans can produce an extraordinarily large number of facial expressions. Computational models and studies in cognitive science and social psychology have long hypothesized that the brain needs to visually interpret these action units to understand other people's actions and intentions. Surprisingly, no studies have identified the neural basis of the visual recognition of these action units. Here, using functional magnetic resonance imaging and an innovative machine learning analysis approach, we identify a consistent and differential coding of action units in the brain. Crucially, in a brain region thought to be responsible for the processing of changeable aspects of the face, multivoxel pattern analysis could decode the presence of specific action units in an image. This coding was consistent across people, allowing the perceived action units to be estimated in participants not used to train the multivoxel decoder. Furthermore, this coding of action units was identified even when participants attended to the emotion category of the facial expression, suggesting an interaction between the visual analysis of action units and emotion categorization, as predicted by the computational models mentioned above. These results provide the first evidence of a representation of action units in the brain and suggest a mechanism for the analysis of large numbers of facial actions, as well as for how this capacity may be lost in psychopathologies.
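To make the decoding step concrete, the following is a minimal sketch of how a multivoxel pattern analysis of this kind is typically set up with scikit-learn. The array shapes, the random stand-in data, the linear classifier, and the AU12 example are illustrative assumptions, not the study's actual pipeline.

```python
# Minimal MVPA sketch: decode the presence of a facial action unit from
# voxel response patterns. All data here are random placeholders.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Hypothetical ROI data: one row of voxel responses per stimulus
# presentation (e.g., 200 trials x 500 voxels in a face-selective region).
n_trials, n_voxels = 200, 500
X = rng.standard_normal((n_trials, n_voxels))

# Binary label per trial: was a given action unit (e.g., AU12, the
# lip-corner puller) present in the face image shown on that trial?
y = rng.integers(0, 2, size=n_trials)

# A linear classifier on voxel patterns; above-chance cross-validated
# accuracy is taken as evidence that the region encodes the action unit.
clf = LinearSVC(C=1.0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"Mean decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```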
Computational models and studies in cognitive and social psychology propose that visual recognition of facial expressions requires an intermediate step that identifies the visible facial changes caused by the movement of specific facial muscles. Because facial expressions are indeed created by moving one's facial muscles, it is logical to assume that our visual system solves this inverse problem. Here, using an innovative machine learning method and neuroimaging data, we identify for the first time a brain region responsible for the recognition of the actions associated with specific facial muscles. Furthermore, this representation is preserved across subjects. Our machine learning analysis does not require mapping the data to a standard brain and may therefore serve as an alternative to hyperalignment.
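The cross-subject claim can likewise be sketched as a leave-one-subject-out evaluation: train the decoder on all but one subject and test it on the held-out subject, with no anatomical alignment to a standard brain. Again, the data, feature dimensionality, and classifier below are hypothetical placeholders, not the authors' method.

```python
# Hedged sketch of cross-subject decoding via leave-one-subject-out
# cross-validation. The feature matrix stands in for whatever
# subject-invariant representation the analysis extracts.
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score

rng = np.random.default_rng(1)

n_subjects, trials_per_subject, n_features = 10, 50, 100
X = rng.standard_normal((n_subjects * trials_per_subject, n_features))
y = rng.integers(0, 2, size=n_subjects * trials_per_subject)

# Group label assigning each trial to its subject.
groups = np.repeat(np.arange(n_subjects), trials_per_subject)

# Each fold holds out every trial from one subject, so above-chance
# accuracy implies the action-unit code generalizes across people.
logo = LeaveOneGroupOut()
scores = cross_val_score(LinearSVC(C=1.0), X, y, groups=groups, cv=logo)
print(f"Leave-one-subject-out accuracy: {scores.mean():.2f}")
```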