Paddy Ross, Anthony P. Atkinson
Department of Psychology, Durham University, Durham, United Kingdom.
Front Psychol. 2020 Mar 3;11:309. doi: 10.3389/fpsyg.2020.00309. eCollection 2020.
Recent models of emotion recognition suggest that when people perceive an emotional expression, they partially activate the respective emotion in themselves, providing a basis for the recognition of that emotion. Much of the focus of these models and of their evidential basis has been on sensorimotor simulation as a basis for facial expression recognition - the idea, in short, that coming to know what another feels involves simulating in your brain the motor plans and associated sensory representations engaged by the other person's brain in producing the facial expression that you see. In this review article, we argue that simulation accounts of emotion recognition would benefit from three key extensions. First, that fuller consideration be given to simulation of bodily and vocal expressions, given that the body and voice are also important expressive channels for providing cues to another's emotional state. Second, that simulation of other aspects of the perceived emotional state, such as changes in the autonomic nervous system and viscera, might have a more prominent role in underpinning emotion recognition than is typically proposed. Sensorimotor simulation models tend to relegate such body-state simulation to a subsidiary role, despite the plausibility of body-state simulation being able to underpin emotion recognition in the absence of typical sensorimotor simulation. Third, that simulation models of emotion recognition be extended to address how embodied processes and emotion recognition abilities develop through the lifespan. It is not currently clear how this system of sensorimotor and body-state simulation develops and in particular how this affects the development of emotion recognition ability. We review recent findings from the emotional body recognition literature and integrate recent evidence regarding the development of mimicry and interoception to significantly expand simulation models of emotion recognition.