Liang Yin, Liu Baolin, Xu Junhai, Zhang Gaoyan, Li Xianglin, Wang Peiyuan, Wang Bin
School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, 300350, People's Republic of China.
State Key Laboratory of Intelligent Technology and Systems, National Laboratory for Information Science and Technology, Tsinghua University, Beijing, 100084, People's Republic of China.
Hum Brain Mapp. 2017 Jun;38(6):3113-3125. doi: 10.1002/hbm.23578. Epub 2017 Mar 27.
Humans can easily recognize others' facial expressions. Among the brain substrates that enable this ability, considerable attention has been paid to face-selective areas; in contrast, whether motion-sensitive areas, which clearly exhibit sensitivity to facial movements, are involved in facial expression recognition remains unclear. The present functional magnetic resonance imaging (fMRI) study used multi-voxel pattern analysis (MVPA) to explore facial expression decoding in both face-selective and motion-sensitive areas. In a block-design experiment, participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise) presented as images, videos, and eyes-obscured videos. Because multiple stimulus types were used, the impacts of facial motion and eye-related information on facial expression decoding could also be examined. Motion-sensitive areas showed significant responses to emotional expressions, and dynamic expressions could be successfully decoded in both face-selective and motion-sensitive areas. Compared with static stimuli, dynamic expressions elicited consistently higher neural responses and decoding performance in all regions. A significant decrease in both activation and decoding accuracy was also observed when eye-related information was absent. Overall, the findings show that emotional expressions are represented in motion-sensitive areas in addition to conventional face-selective areas, suggesting that motion-sensitive regions may also effectively contribute to facial expression recognition. The results further suggest that facial motion and eye-related information play important roles by carrying considerable expression information that facilitates facial expression recognition. Hum Brain Mapp 38:3113-3125, 2017. © 2017 Wiley Periodicals, Inc.
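The abstract does not specify the classifier used for MVPA, so the following is only a minimal sketch of one common decoding scheme: leave-one-run-out, correlation-based nearest-centroid classification of voxel patterns (in the style of classic MVPA studies). All numbers (6 conditions, 8 runs, 200 voxels, noise levels) and the simulated data are hypothetical, standing in for condition-wise response estimates from a region of interest such as a face-selective or motion-sensitive area.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical setup: 6 emotion conditions x 8 runs x 200 voxels.
# Real inputs would be per-run response estimates from an ROI.
n_conditions, n_runs, n_voxels = 6, 8, 200

# Each condition carries a weak condition-specific spatial pattern plus noise.
signal = rng.normal(0.0, 1.0, (n_conditions, n_voxels))
data = signal[:, None, :] + rng.normal(0.0, 2.0, (n_conditions, n_runs, n_voxels))

# Leave-one-run-out decoding: average the training runs into one centroid
# pattern per condition, then assign each held-out pattern to the condition
# whose centroid it correlates with most strongly.
correct, total = 0, 0
for test_run in range(n_runs):
    centroids = np.delete(data, test_run, axis=1).mean(axis=1)  # (cond, voxel)
    for cond in range(n_conditions):
        test_pattern = data[cond, test_run]
        r = [np.corrcoef(test_pattern, centroids[c])[0, 1]
             for c in range(n_conditions)]
        correct += int(np.argmax(r) == cond)
        total += 1

accuracy = correct / total
chance = 1 / n_conditions
print(f"decoding accuracy = {accuracy:.2f} (chance = {chance:.2f})")
```

Above-chance accuracy in an ROI is the evidence that the region's activity patterns carry expression information; in the study, this comparison was made separately for face-selective and motion-sensitive areas and across stimulus types (static, dynamic, eyes-obscured).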