Explaining face-voice matching decisions: The contribution of mouth movements, stimulus effects and response biases.

Affiliations

Department of Biological and Experimental Psychology, School of Biological and Chemical Sciences, Queen Mary University of London, Mile End Road, London, E1 4NS, UK.

Department of Speech, Hearing and Phonetic Sciences, University College London, 2 Wakefield Street, London, WC1N 1PF, UK.

Publication Information

Atten Percept Psychophys. 2021 Jul;83(5):2205-2216. doi: 10.3758/s13414-021-02290-5. Epub 2021 Apr 1.

Abstract

Previous studies have shown that face-voice matching accuracy is more consistently above chance for dynamic (i.e. speaking) faces than for static faces. This suggests that dynamic information can play an important role in informing matching decisions. We initially asked whether this advantage for dynamic stimuli is due to shared information across modalities that is encoded in articulatory mouth movements. Participants completed a sequential face-voice matching task with (1) static images of faces, (2) dynamic videos of faces, (3) dynamic videos where only the mouth was visible, and (4) dynamic videos where the mouth was occluded, in a well-controlled stimulus set. Surprisingly, after accounting for random variation in the data due to design choices, accuracy for all four conditions was at chance. Crucially, however, exploratory analyses revealed that participants were not responding randomly, with different patterns of response biases being apparent for different conditions. Our findings suggest that face-voice identity matching may not be possible with above-chance accuracy but that analyses of response biases can shed light upon how people attempt face-voice matching. We discuss these findings with reference to the differential functional roles for faces and voices recently proposed for multimodal person perception.
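
The abstract reports accuracy at chance in all four conditions alongside condition-specific response biases. As an illustration only (not the authors' mixed-model analysis), the sketch below assumes a two-alternative ("match"/"mismatch") design and shows how per-condition accuracy can be tested against chance and how a simple bias toward responding "match" can be quantified; the function name and the binomial test are hypothetical choices for this example.

```python
# Illustrative sketch (not the authors' analysis): accuracy vs. chance and a
# simple "match" response bias for a two-alternative face-voice matching task.
from scipy.stats import binomtest

def summarize_condition(responses, correct_answers, chance=0.5):
    """responses / correct_answers: lists of 'match' / 'mismatch', one per trial."""
    n_trials = len(responses)
    n_correct = sum(r == c for r, c in zip(responses, correct_answers))
    accuracy = n_correct / n_trials
    # Two-sided binomial test of accuracy against chance (0.5 for two alternatives).
    p_vs_chance = binomtest(n_correct, n_trials, chance).pvalue
    # Response bias: proportion of "match" responses; 0.5 means no bias,
    # values above 0.5 indicate a tendency to accept face-voice pairs as matching.
    match_bias = sum(r == "match" for r in responses) / n_trials
    return {"accuracy": accuracy, "p_vs_chance": p_vs_chance, "match_bias": match_bias}

# Hypothetical usage with made-up trial data for a single condition:
responses       = ["match", "match", "mismatch", "match", "mismatch", "match"]
correct_answers = ["match", "mismatch", "mismatch", "match", "match", "match"]
print(summarize_condition(responses, correct_answers))
```

On this logic, a condition can show chance-level accuracy while still exhibiting a systematic response bias, which is the pattern the exploratory analyses in the abstract describe.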

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/b44e/8213568/20d03fb2c77c/13414_2021_2290_Fig1_HTML.jpg
