Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, FI-00076 Aalto, Finland.
Faculty of Social Sciences, Tampere University, FI-33014 Tampere, Finland.
Soc Cogn Affect Neurosci. 2020 Oct 8;15(8):803-813. doi: 10.1093/scan/nsaa110.
Human neuroimaging and behavioural studies suggest that somatomotor 'mirroring' of seen facial expressions may support their recognition. Here we show that viewing a specific facial expression triggers the representation corresponding to that expression in the observer's brain. Twelve healthy female volunteers underwent two separate fMRI sessions: one where they observed and another where they displayed three types of facial expressions (joy, anger and disgust). A pattern classifier based on Bayesian logistic regression was trained to classify facial expressions (i) within modality (trained and tested with data recorded while observing or displaying expressions) and (ii) between modalities (trained with data recorded while displaying expressions and tested with data recorded while observing the expressions). Cross-modal classification was performed in two ways: with and without functional realignment of the data across the observing and displaying conditions. All expressions could be classified accurately both within and across modalities. The brain regions contributing most to cross-modal classification accuracy included the primary motor and somatosensory cortices. Functional realignment led to only minor increases in cross-modal classification accuracy for most of the examined ROIs, but substantial improvement was observed in the occipito-ventral components of the core system for facial expression recognition. Altogether, these results support the embodied emotion recognition model and show that expression-specific somatomotor neural signatures could support facial expression recognition.
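The cross-modal decoding logic described above can be illustrated with a minimal sketch. This is not the authors' pipeline: scikit-learn's LogisticRegression stands in for the Bayesian logistic regression classifier, random arrays stand in for voxel-wise fMRI response patterns, and an orthogonal Procrustes transform stands in for the functional realignment step; all variable names and data here are hypothetical.

```python
import numpy as np
from scipy.linalg import orthogonal_procrustes
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Hypothetical trials-by-voxels pattern matrices for one subject:
# X_display = patterns recorded while displaying expressions (training modality),
# X_observe = patterns recorded while observing expressions (testing modality).
n_trials, n_voxels = 90, 200
X_display = rng.standard_normal((n_trials, n_voxels))
X_observe = rng.standard_normal((n_trials, n_voxels))
y = np.repeat([0, 1, 2], n_trials // 3)  # labels: joy, anger, disgust

# (i) Cross-modal classification without realignment:
# train on the displaying data, test on the observing data.
clf = LogisticRegression(max_iter=1000)  # stand-in for Bayesian logistic regression
clf.fit(X_display, y)
acc_raw = clf.score(X_observe, y)

# (ii) Functional realignment: map the observing-condition patterns into the
# displaying-condition pattern space with an orthogonal Procrustes rotation,
# then test the same classifier on the realigned patterns. In a real analysis
# the transform would be estimated from independent data to avoid circularity.
R, _ = orthogonal_procrustes(X_observe, X_display)
acc_aligned = clf.score(X_observe @ R, y)

print(f"cross-modal accuracy, raw: {acc_raw:.2f}; realigned: {acc_aligned:.2f}")
```

With random data both accuracies hover near chance (about 0.33 for three classes); the point of the sketch is only the train-on-one-modality, test-on-the-other structure, and that realignment is applied to the test patterns before scoring rather than retraining the classifier.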