
Emotional state dependence facilitates automatic imitation of visual speech.

Author information

Virhia Jasmine, Kotz Sonja A, Adank Patti

Affiliations

Department of Psychology, Royal Holloway, University of London, Egham, UK.

Department of Neuropsychology and Psychopharmacology, Maastricht University, Maastricht, The Netherlands.

Publication information

Q J Exp Psychol (Hove). 2019 Dec;72(12):2833-2847. doi: 10.1177/1747021819867856. Epub 2019 Aug 30.

Abstract

Observing someone speak automatically triggers the cognitive and neural mechanisms required to produce speech, a phenomenon known as automatic imitation. Automatic imitation of speech can be measured using the Stimulus Response Compatibility (SRC) paradigm, which shows facilitated response times (RTs) when responding to a prompt in the presence of a congruent distracter (a video of someone saying the same speech sound), compared with responding in the presence of an incongruent distracter (a video of someone saying a different speech sound). Current models of the relation between emotion and cognitive control suggest that automatic imitation can be modulated by varying stimulus-driven task aspects, that is, the distracter's emotional valence. It is unclear, however, how the emotional state of the observer affects automatic imitation. The current study explored independent effects of the emotional valence of the distracter (Stimulus-driven Dependence) and of the observer's emotional state (State Dependence) on automatic imitation of speech. Participants completed an SRC paradigm for visual speech stimuli: they produced a prompt superimposed over a neutral or emotional (happy or angry) distracter video. State Dependence was manipulated by asking participants to speak the prompt in a neutral or emotional (happy or angry) voice. Automatic imitation was facilitated for emotional prompts, but not for emotional distracters, implying a facilitating effect of State Dependence. The results are interpreted in the context of theories of automatic imitation and cognitive control, and we suggest that models of automatic imitation should be modified to accommodate state-dependent and stimulus-driven effects.
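The facilitation described above is conventionally quantified as the congruency effect: mean RT on incongruent trials minus mean RT on congruent trials, with larger positive values indicating stronger automatic imitation. A minimal sketch of that computation (the RT values are hypothetical, for illustration only; they are not data from this study):

```python
# Congruency (automatic imitation) effect in an SRC paradigm:
# effect = mean RT on incongruent trials - mean RT on congruent trials.
# All RT values below are hypothetical placeholders.
from statistics import mean

congruent_rts = [512, 498, 530, 505]    # ms; distracter video matches the prompt
incongruent_rts = [561, 549, 572, 558]  # ms; distracter video mismatches the prompt

congruency_effect = mean(incongruent_rts) - mean(congruent_rts)
print(f"Congruency effect: {congruency_effect:.2f} ms")  # prints "Congruency effect: 48.75 ms"
```

A positive effect means congruent distracters sped responses relative to incongruent ones; the study asks whether this difference grows when the prompt (observer's state) or the distracter is emotional.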

