Automatic weighing of faces and voices based on cue saliency in trustworthiness impressions.

Affiliations

Institute of Psychology, Leiden University, Leiden, The Netherlands.

Psychology of Language, Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands.

Publication

Sci Rep. 2023 Nov 16;13(1):20037. doi: 10.1038/s41598-023-45471-y.

Abstract

When encountering people, their faces are usually paired with their voices. We know that if the face looks familiar, and the voice is high-pitched, the first impression will be positive and trustworthy. But, how do we integrate these two multisensory physical attributes? Here, we explore 1) the automaticity of audiovisual integration in shaping first impressions of trustworthiness, and 2) the relative contribution of each modality in the final judgment. We find that, even though participants can focus their attention on one modality to judge trustworthiness, they fail to completely filter out the other modality for both faces (Experiment 1a) and voices (Experiment 1b). When asked to judge the person as a whole, people rely more on voices (Experiment 2) or faces (Experiment 3). We link this change to the distinctiveness of each cue in the stimulus set rather than a general property of the modality. Overall, we find that people weigh faces and voices automatically based on cue saliency when forming trustworthiness impressions.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/96f5/10654569/2f2ed89d1259/41598_2023_45471_Fig1_HTML.jpg
