Prosody Dominates Over Semantics in Emotion Word Processing: Evidence From Cross-Channel and Cross-Modal Stroop Effects.

Affiliations

Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China.

Department of Speech-Language-Hearing Science & Center for Neurobehavioral Development, University of Minnesota, Minneapolis.

Publication Information

J Speech Lang Hear Res. 2020 Mar 23;63(3):896-912. doi: 10.1044/2020_JSLHR-19-00258. Epub 2020 Mar 18.

Abstract

Purpose: Emotional speech communication involves multisensory integration of linguistic (e.g., semantic content) and paralinguistic (e.g., prosody and facial expressions) messages. Previous studies of linguistic versus paralinguistic salience effects in emotional speech processing have produced inconsistent findings. In this study, we investigated the relative perceptual saliency of emotion cues in a cross-channel, auditory-alone task (a semantics-prosody Stroop task) and a cross-modal audiovisual task (a semantics-prosody-face Stroop task).

Method: Thirty normal Chinese adults participated in two Stroop experiments with spoken emotion adjectives in Mandarin Chinese. Experiment 1 manipulated the auditory pairing of emotional prosody (happy or sad) and lexical semantic content in congruent and incongruent conditions. Experiment 2 extended the protocol to cross-modal integration by adding a visual facial expression during auditory stimulus presentation. On each trial, participants judged the emotional information specified by the selective-attention instruction.

Results: Accuracy and reaction time data indicated that, despite the increase in cognitive demand and task complexity in Experiment 2, prosody was consistently more salient than semantic content for emotion word processing, but it did not take precedence over facial expression. Although congruent stimuli enhanced performance in both experiments, the facilitatory effect was smaller in Experiment 2.

Conclusion: Together, the results demonstrate the salient role of paralinguistic prosodic cues in emotion word processing and a congruence facilitation effect in multisensory integration. Our study contributes tonal-language data on how linguistic and paralinguistic messages converge in multisensory speech processing and lays a foundation for further exploring the brain mechanisms of cross-channel/cross-modal emotion integration, with potential clinical applications.
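
To make the congruence manipulation and the Stroop (interference) measure concrete, here is a minimal Python sketch assuming the 2 x 2 happy/sad design described in the Method. The trial fields, reaction time values, and function names are illustrative assumptions for exposition only, not the authors' materials or analysis pipeline.

```python
# Hypothetical sketch (not the authors' code): building congruent/incongruent
# prosody-semantics trial pairings and computing a simple congruence (Stroop)
# effect from reaction times. All names and numbers are illustrative.
from itertools import product
from statistics import mean

EMOTIONS = ("happy", "sad")

def build_trials():
    """Pair each semantic emotion with each prosodic emotion (2 x 2 design)."""
    return [
        {
            "semantics": semantics,           # lexical emotional meaning of the word
            "prosody": prosody,               # emotional tone of voice
            "congruent": semantics == prosody,
        }
        for semantics, prosody in product(EMOTIONS, EMOTIONS)
    ]

def stroop_effect(rt_log):
    """Mean RT difference (incongruent - congruent), i.e., the interference cost."""
    congruent = [rt for is_congruent, rt in rt_log if is_congruent]
    incongruent = [rt for is_congruent, rt in rt_log if not is_congruent]
    return mean(incongruent) - mean(congruent)

if __name__ == "__main__":
    for trial in build_trials():
        print(trial)
    # Fabricated example RTs (ms), purely to show the computation.
    example_log = [(True, 612), (True, 598), (False, 655), (False, 671)]
    print(f"Congruence (Stroop) effect: {stroop_effect(example_log):.1f} ms")
```

A positive value of the difference would indicate slower responses on incongruent trials, i.e., interference from the to-be-ignored cue; the same logic extends to the audiovisual version by adding a facial-expression field to each trial.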

