
Decoding facial expressions based on face-selective and motion-sensitive areas.

Author Information

Liang Yin, Liu Baolin, Xu Junhai, Zhang Gaoyan, Li Xianglin, Wang Peiyuan, Wang Bin

Affiliations

School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, 300350, People's Republic of China.

State Key Laboratory of Intelligent Technology and Systems, National Laboratory for Information Science and Technology, Tsinghua University, Beijing, 100084, People's Republic of China.

Publication Information

Hum Brain Mapp. 2017 Jun;38(6):3113-3125. doi: 10.1002/hbm.23578. Epub 2017 Mar 27.

Abstract

Humans can easily recognize others' facial expressions. Among the brain substrates that enable this ability, considerable attention has been paid to face-selective areas; in contrast, whether motion-sensitive areas, which clearly exhibit sensitivity to facial movements, are involved in facial expression recognition remained unclear. The present functional magnetic resonance imaging (fMRI) study used multi-voxel pattern analysis (MVPA) to explore facial expression decoding in both face-selective and motion-sensitive areas. In a block design experiment, participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise) in images, videos, and eyes-obscured videos. Due to the use of multiple stimulus types, the impacts of facial motion and eye-related information on facial expression decoding were also examined. It was found that motion-sensitive areas showed significant responses to emotional expressions and that dynamic expressions could be successfully decoded in both face-selective and motion-sensitive areas. Compared with static stimuli, dynamic expressions elicited consistently higher neural responses and decoding performance in all regions. A significant decrease in both activation and decoding accuracy due to the absence of eye-related information was also observed. Overall, the findings showed that emotional expressions are represented in motion-sensitive areas in addition to conventional face-selective areas, suggesting that motion-sensitive regions may also effectively contribute to facial expression recognition. The results also suggested that facial motion and eye-related information played important roles by carrying considerable expression information that could facilitate facial expression recognition. Hum Brain Mapp 38:3113-3125, 2017. © 2017 Wiley Periodicals, Inc.
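The abstract's MVPA approach can be illustrated with a minimal sketch. This is not the authors' pipeline (their design, voxel counts, and classifier are not reproduced here); it only shows the core idea of multi-voxel pattern analysis on simulated data: treating each stimulus condition as a multi-voxel activity pattern and testing whether held-out patterns can be assigned to the correct expression class, here with a correlation-based nearest-centroid classifier under leave-one-run-out cross-validation. All sizes and noise levels are hypothetical.

```python
# Hypothetical MVPA decoding sketch (simulated data, not the study's pipeline):
# correlation-based nearest-centroid classification of voxel patterns for six
# basic expressions, evaluated with leave-one-run-out cross-validation.
import numpy as np

rng = np.random.default_rng(0)
n_runs, n_classes, n_voxels = 8, 6, 100  # 6 expressions, 100-voxel ROI (assumed)

# Each expression class has a stable voxel "signature" plus run-level noise.
signatures = rng.normal(size=(n_classes, n_voxels))
patterns = signatures[None] + 0.8 * rng.normal(size=(n_runs, n_classes, n_voxels))

def decode_accuracy(patterns):
    """Leave one run out; correlate each test pattern with class centroids
    estimated from the remaining runs; predict the best-matching class."""
    n_runs, n_classes, _ = patterns.shape
    correct = 0
    for test_run in range(n_runs):
        train = np.delete(patterns, test_run, axis=0)
        centroids = train.mean(axis=0)                 # (classes, voxels)
        for true_class in range(n_classes):
            test = patterns[test_run, true_class]
            r = [np.corrcoef(test, c)[0, 1] for c in centroids]
            correct += int(np.argmax(r) == true_class)
    return correct / (n_runs * n_classes)

acc = decode_accuracy(patterns)
print(f"decoding accuracy: {acc:.2f} (chance = {1 / n_classes:.3f})")
```

Decoding is considered successful when cross-validated accuracy exceeds chance (1/6 for six classes), which is the logic behind the paper's claim that dynamic expressions "could be successfully decoded" in a region.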



Cited By

Decoding six basic emotions from brain functional connectivity patterns.
Sci China Life Sci. 2023 Apr;66(4):835-847. doi: 10.1007/s11427-022-2206-3. Epub 2022 Nov 11.

