Incongruence Between Observers' and Observed Facial Muscle Activation Reduces Recognition of Emotional Facial Expressions From Video Stimuli.

Author Information

Wingenbach Tanja S H, Brosnan Mark, Pfaltz Monique C, Plichta Michael M, Ashwin Chris

Affiliations

Centre for Applied Autism Research, Department of Psychology, University of Bath, Bath, United Kingdom.

Social and Cognitive Neuroscience Laboratory, Centre of Biology and Health Sciences, Mackenzie Presbyterian University, São Paulo, Brazil.

Publication Information

Front Psychol. 2018 Jun 6;9:864. doi: 10.3389/fpsyg.2018.00864. eCollection 2018.

Abstract

According to embodied cognition accounts, viewing others' facial emotion can elicit the respective emotion representation in observers, which entails simulations of sensory, motor, and contextual experiences. In line with that, published research found viewing others' facial emotion to elicit automatic matched facial muscle activation, which was further found to facilitate emotion recognition. Perhaps making congruent facial muscle activity explicit produces an even greater recognition advantage. If there is conflicting sensory information, i.e., incongruent facial muscle activity, this might impede recognition. The effects of actively manipulating facial muscle activity on facial emotion recognition from videos were investigated across three experimental conditions: (a) explicit imitation of viewed facial emotional expressions (stimulus-congruent condition), (b) pen-holding with the lips (stimulus-incongruent condition), and (c) passive viewing (control condition). It was hypothesised that (1) experimental conditions (a) and (b) result in greater facial muscle activity than (c), (2) experimental condition (a) increases emotion recognition accuracy from others' faces compared to (c), and (3) experimental condition (b) lowers recognition accuracy for expressions with a salient facial feature in the lower, but not the upper, face area compared to (c). Participants (42 males, 42 females) underwent a facial emotion recognition experiment (ADFES-BIV) while electromyography (EMG) was recorded from five facial muscle sites. The order of the experimental conditions was counter-balanced. Pen-holding caused stimulus-incongruent facial muscle activity for expressions with facial feature saliency in the lower face region, which reduced recognition of lower face region emotions. Explicit imitation caused stimulus-congruent facial muscle activity without modulating recognition. Methodological implications are discussed.
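To make the design described above concrete, the sketch below illustrates one plausible way to counter-balance the order of the three within-subject conditions across participants and to score recognition accuracy separately by condition and face region. This is an illustrative assumption, not the authors' published analysis code: the condition labels, the emotion-to-face-region mapping, and the `recognition_accuracy` helper are hypothetical names chosen to mirror the abstract.

```python
# Illustrative sketch only; the paper does not publish analysis code.
# Condition labels, the face-region mapping, and column names are assumptions.
from itertools import permutations

import pandas as pd

# Three within-subject conditions, presented in counter-balanced order.
CONDITIONS = ("imitation", "pen_holding", "passive_viewing")
ORDERS = list(permutations(CONDITIONS))  # 6 possible orders across participants


def assign_order(participant_id: int) -> tuple:
    """Cycle through the six condition orders to counter-balance presentation."""
    return ORDERS[participant_id % len(ORDERS)]


# Hypothetical mapping of expressions to the face region carrying the salient
# feature (the abstract contrasts lower- vs. upper-face-region emotions).
FACE_REGION = {"happiness": "lower", "disgust": "lower",
               "fear": "upper", "anger": "upper"}


def recognition_accuracy(trials: pd.DataFrame) -> pd.DataFrame:
    """Mean proportion of correct responses per condition and face region.

    Expects one row per trial with columns:
    participant, condition, emotion, correct (0/1).
    """
    trials = trials.assign(face_region=trials["emotion"].map(FACE_REGION))
    return (trials
            .groupby(["condition", "face_region"])["correct"]
            .mean()
            .rename("accuracy")
            .reset_index())


if __name__ == "__main__":
    demo = pd.DataFrame({
        "participant": [1, 1, 1, 2, 2, 2],
        "condition": ["imitation", "pen_holding", "passive_viewing"] * 2,
        "emotion": ["happiness", "disgust", "fear", "anger", "happiness", "fear"],
        "correct": [1, 0, 1, 1, 1, 0],
    })
    print(assign_order(participant_id=0))
    print(recognition_accuracy(demo))
```

Splitting accuracy by face region is the step that matters for hypothesis (3): the pen-holding manipulation interferes with lower-face musculature, so any recognition cost is expected to appear for lower-face-region emotions rather than upper-face-region ones.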

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd63/5997820/75c9b13a8a65/fpsyg-09-00864-g001.jpg
