Liang Yin, Liu Baolin, Li Xianglin, Wang Peiyuan
School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China.
State Key Laboratory of Intelligent Technology and Systems, National Laboratory for Information Science and Technology, Tsinghua University, Beijing, China.
Front Hum Neurosci. 2018 Mar 19;12:94. doi: 10.3389/fnhum.2018.00094. eCollection 2018.
How humans achieve efficient recognition of others' facial expressions is an important question in cognitive neuroscience, and previous studies have identified specific cortical regions that show preferential activation to facial expressions. However, the potential contribution of connectivity patterns to the processing of facial expressions remains unclear. The present functional magnetic resonance imaging (fMRI) study explored whether facial expressions could be decoded from functional connectivity (FC) patterns using multivariate pattern analysis combined with machine learning algorithms (fcMVPA). We employed a block-design experiment and recorded neural activity while participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise); both static and dynamic expression stimuli were included. A behavioral experiment after scanning confirmed, via classification accuracies and emotional-intensity ratings, the validity of the facial stimuli presented during the fMRI experiment. We obtained whole-brain FC patterns for each facial expression and found that both static and dynamic facial expressions could be successfully decoded from these patterns. Moreover, we identified expression-discriminative networks for static and dynamic facial expressions that extend beyond the conventional face-selective areas. Overall, these results reveal that large-scale FC patterns contain rich expression information sufficient to accurately decode facial expressions, suggesting a novel mechanism, based on general interactions among distributed brain regions, that contributes to human facial expression recognition.
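To make the fcMVPA idea concrete, the following is a minimal illustrative sketch, not the authors' pipeline: FC patterns are computed as the vectorized upper triangle of a region-by-region correlation matrix, and condition labels are decoded with cross-validation. All data are synthetic, and the region count, trial count, and the nearest-centroid decoder (standing in for the typical SVM-style classifier) are assumptions made only for demonstration.

```python
# Illustrative sketch of FC-pattern decoding (fcMVPA-style); synthetic data.
# Assumptions (not from the paper): 20 regions, 30 trials per emotion,
# nearest-centroid decoder instead of an SVM.
import numpy as np

rng = np.random.default_rng(0)
emotions = ["anger", "disgust", "fear", "joy", "sadness", "surprise"]
n_regions, n_trials, n_timepoints = 20, 30, 200

def fc_vector(timeseries):
    """Flatten the upper triangle of the region x region correlation matrix."""
    corr = np.corrcoef(timeseries)
    return corr[np.triu_indices(len(corr), k=1)]

# Synthetic dataset: each emotion gets its own mixing matrix, so trials of
# the same class share a covariance (hence FC) structure.
X, y = [], []
for label in range(len(emotions)):
    mixing = np.eye(n_regions) + 0.3 * rng.standard_normal((n_regions, n_regions))
    for _ in range(n_trials):
        ts = mixing @ rng.standard_normal((n_regions, n_timepoints))
        X.append(fc_vector(ts))
        y.append(label)
X, y = np.array(X), np.array(y)

def nearest_centroid_cv(X, y, n_folds=5):
    """k-fold cross-validated accuracy of a nearest-centroid decoder."""
    idx = rng.permutation(len(y))
    folds = np.array_split(idx, n_folds)
    correct = 0
    for fold in folds:
        train = np.setdiff1d(idx, fold)
        centroids = np.stack([X[train][y[train] == c].mean(axis=0)
                              for c in np.unique(y)])
        for i in fold:
            pred = np.argmin(np.linalg.norm(centroids - X[i], axis=1))
            correct += pred == y[i]
    return correct / len(y)

accuracy = nearest_centroid_cv(X, y)
print(f"decoding accuracy: {accuracy:.2f} (chance = {1 / len(emotions):.2f})")
```

Because each class has a distinct covariance structure, the decoder recovers the labels well above the 1/6 chance level; in a real fcMVPA analysis the inputs would instead be subject-level FC estimates from fMRI time series, with cross-validation across runs or participants.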