
Is cross-modal integration of emotional expressions independent of attentional resources?

Author Information

Vroomen J, Driver J, de Gelder B

Affiliations

Tilburg University, Department of Psychology, P.O. Box 90153, 5000 LE, Tilburg, The Netherlands.

Publication Information

Cogn Affect Behav Neurosci. 2001 Dec;1(4):382-7. doi: 10.3758/cabn.1.4.382.

Abstract

In this study, we examined whether integration of visual and auditory information about emotions requires limited attentional resources. Subjects judged whether a voice expressed happiness or fear, while trying to ignore a concurrently presented static facial expression. As an additional task, the subjects had to add two numbers together rapidly (Experiment 1), count the occurrences of a target digit in a rapid serial visual presentation (Experiment 2), or judge the pitch of a tone as high or low (Experiment 3). The visible face had an impact on judgments of the emotion of the heard voice in all the experiments. This cross-modal effect was independent of whether or not the subjects performed a demanding additional task. This suggests that integration of visual and auditory information about emotions may be a mandatory process, unconstrained by attentional resources.

