
Prior expectations guide multisensory integration during face-to-face communication.

Authors

Mazzi Giulia, Ferrari Ambra, Mencaroni Maria Laura, Valzolgher Chiara, Tommasini Mirko, Pavani Francesco, Benetti Stefania

Affiliations

Center for Mind/Brain Sciences (CIMeC), University of Trento, Rovereto, Italy.

Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands.

Publication

PLoS Comput Biol. 2025 Sep 12;21(9):e1013468. doi: 10.1371/journal.pcbi.1013468. eCollection 2025 Sep.

DOI:10.1371/journal.pcbi.1013468
PMID:40939006
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12448992/
Abstract

Face-to-face communication relies on the seamless integration of multisensory signals, including voice, gaze, and head movements, to convey meaning effectively. This poses a fundamental computational challenge: optimally binding signals sharing the same communicative intention (e.g., looking at the addressee while speaking) and segregating unrelated signals (e.g., looking away while coughing), all within the rapid turn-taking dynamics of conversation. Critically, the computational mechanisms underlying this extraordinary feat remain largely unknown. Here, we cast face-to-face communication as a Bayesian Causal Inference problem to formally test whether prior expectations arbitrate between the integration and segregation of vocal and bodily signals. Specifically, we asked whether there is a stronger prior tendency to integrate audiovisual signals that convey the same communicative intention, thus establishing a crossmodal pragmatic correspondence. Additionally, we evaluated whether observers solve causal inference by adopting optimal Bayesian decision strategies or non-optimal approximate heuristics. In a spatial localization task, participants watched audiovisual clips of a speaker where the audio (voice) and the video (bodily cues) were sampled either from congruent positions or at increasing spatial disparities. Crucially, we manipulated the pragmatic correspondence of the signals: in a communicative condition, the speaker addressed the participant with their head, gaze and speech; in a non-communicative condition, the speaker kept the head down and produced a meaningless vocalization. We measured audiovisual integration through the ventriloquist effect, which quantifies how much the perceived audio position is misplaced towards the video position. Combining psychophysics with computational modelling, we show that observers solved audiovisual causal inference using non-optimal heuristics that nevertheless approximate optimal Bayesian inference with high accuracy. Remarkably, participants showed a stronger tendency to integrate vocal and bodily information when signals conveyed congruent communicative intent, suggesting that pragmatic correspondences enhance multisensory integration. Collectively, our findings provide novel and compelling evidence that face-to-face communication is shaped by deeply ingrained expectations about how multisensory signals should be structured and interpreted.
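The Bayesian Causal Inference computation the abstract invokes is standard in the multisensory literature (after Körding et al., 2007). The sketch below is a minimal, illustrative implementation of the generic model-averaging observer for auditory localization with a visual distractor, not the authors' actual model; all parameter values (`sigma_a`, `sigma_v`, `sigma_p`, `p_common`) are arbitrary assumptions chosen for illustration.

```python
import math

def bci_auditory_estimate(x_a, x_v, sigma_a=8.0, sigma_v=2.0,
                          sigma_p=15.0, p_common=0.5):
    """Auditory location estimate under Bayesian Causal Inference
    with model averaging (Koerding et al., 2007).

    x_a, x_v : noisy auditory / visual position samples (deg)
    sigma_a, sigma_v : sensory noise SDs; sigma_p : SD of a zero-mean spatial prior
    p_common : prior probability that both signals share one cause
    """
    va, vv, vp = sigma_a**2, sigma_v**2, sigma_p**2

    # Likelihood of the pair under one common cause (C=1),
    # marginalizing over the shared source position.
    var1 = va * vv + va * vp + vv * vp
    l1 = math.exp(-0.5 * ((x_a - x_v)**2 * vp + x_a**2 * vv + x_v**2 * va)
                  / var1) / (2 * math.pi * math.sqrt(var1))

    # Likelihood under two independent causes (C=2).
    l2 = math.exp(-0.5 * (x_a**2 / (va + vp) + x_v**2 / (vv + vp))) \
         / (2 * math.pi * math.sqrt((va + vp) * (vv + vp)))

    # Posterior probability of a common cause.
    post_c1 = l1 * p_common / (l1 * p_common + l2 * (1 - p_common))

    # Reliability-weighted position estimates under each causal structure.
    s_fused = (x_a / va + x_v / vv) / (1 / va + 1 / vv + 1 / vp)  # C=1
    s_seg = (x_a / va) / (1 / va + 1 / vp)                        # C=2

    # Model averaging: blend the two estimates by the causal posterior.
    return post_c1 * s_fused + (1 - post_c1) * s_seg
```

In this observer, the ventriloquist effect is the pull of the auditory estimate towards the visual sample; it is strong at small audiovisual disparities (where a common cause is likely) and fades at large ones. A stronger prior to integrate communicative signals would correspond to a larger `p_common` in the communicative condition.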


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f38d/12448992/c1fdbea549f2/pcbi.1013468.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f38d/12448992/0b32e4c1af13/pcbi.1013468.g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f38d/12448992/e6e9797ab4be/pcbi.1013468.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f38d/12448992/bf99e8594869/pcbi.1013468.g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f38d/12448992/133e4c8bf835/pcbi.1013468.g005.jpg

Similar Articles

1. Prior expectations guide multisensory integration during face-to-face communication.
PLoS Comput Biol. 2025 Sep 12;21(9):e1013468. doi: 10.1371/journal.pcbi.1013468. eCollection 2025 Sep.
2. Prescription of Controlled Substances: Benefits and Risks.
3. Auditory-Perceptual Evaluation of Situationally-Bound Judgements of Listener Comfort for Postlaryngectomy Voice and Speech.
Int J Lang Commun Disord. 2025 Sep-Oct;60(5):e70114. doi: 10.1111/1460-6984.70114.
4. Impaired neural encoding of naturalistic audiovisual speech in autism.
Neuroimage. 2025 Sep;318:121397. doi: 10.1016/j.neuroimage.2025.121397. Epub 2025 Jul 30.
5. Post-pandemic planning for maternity care for local, regional, and national maternity systems across the four nations: a mixed-methods study.
Health Soc Care Deliv Res. 2025 Sep;13(35):1-25. doi: 10.3310/HHTE6611.
6. Parents' and informal caregivers' views and experiences of communication about routine childhood vaccination: a synthesis of qualitative evidence.
Cochrane Database Syst Rev. 2017 Feb 7;2(2):CD011787. doi: 10.1002/14651858.CD011787.pub2.
7. Developing evidence-based guidelines for describing potential benefits and harms within patient information leaflets/sheets (PILs) that inform and do not cause harm (PrinciPILs).
Health Technol Assess. 2025 Aug;29(43):1-20. doi: 10.3310/GJJH2402.
8. The effectiveness and acceptability of multimedia information when recruiting children and young people to trials: the TRECA meta-analysis of SWATs.
Health Soc Care Deliv Res. 2023 Nov;11(24):1-112. doi: 10.3310/HTPM3841.
9. "In a State of Flow": A Qualitative Examination of Autistic Adults' Phenomenological Experiences of Task Immersion.
Autism Adulthood. 2024 Sep 16;6(3):362-373. doi: 10.1089/aut.2023.0032. eCollection 2024 Sep.
10. Prosodic skills in Spanish-speaking adolescents and young adults with Down syndrome.
Int J Lang Commun Disord. 2024 Jul-Aug;59(4):1284-1295. doi: 10.1111/1460-6984.13001. Epub 2023 Dec 28.

References Cited in This Article

1. Understanding discourse in face-to-face settings: The impact of multimodal cues and listening conditions.
J Exp Psychol Learn Mem Cogn. 2025 May;51(5):837-854. doi: 10.1037/xlm0001399. Epub 2024 Oct 14.
2. Attentional Capture and Control.
Annu Rev Psychol. 2025 Jan;76(1):251-273. doi: 10.1146/annurev-psych-011624-025340. Epub 2024 Dec 3.
3. Gestures speed up responses to questions.
Lang Cogn Neurosci. 2024 Feb 17;39(4):423-430. doi: 10.1080/23273798.2024.2314021. eCollection 2024.
4. Visual Preference for Socially Relevant Spatial Relations in Humans and Monkeys.
Psychol Sci. 2024 Jun;35(6):681-693. doi: 10.1177/09567976241242995. Epub 2024 Apr 29.
5. Spatial hearing training in virtual reality with simulated asymmetric hearing loss.
Sci Rep. 2024 Jan 30;14(1):2469. doi: 10.1038/s41598-024-51892-0.
6. Self as a prior: The malleability of Bayesian multisensory integration to social salience.
Br J Psychol. 2024 May;115(2):185-205. doi: 10.1111/bjop.12683. Epub 2023 Sep 25.
7. Are social interactions preferentially attended in real-world scenes? Evidence from change blindness.
Q J Exp Psychol (Hove). 2023 Oct;76(10):2293-2302. doi: 10.1177/17470218231161044. Epub 2023 Mar 26.
8. Multimodal processing in face-to-face interactions: A bridging link between psycholinguistics and sensory neuroscience.
Front Hum Neurosci. 2023 Feb 2;17:1108354. doi: 10.3389/fnhum.2023.1108354. eCollection 2023.
9. Interactionally Embedded Gestalt Principles of Multimodal Human Communication.
Perspect Psychol Sci. 2023 Sep;18(5):1136-1159. doi: 10.1177/17456916221141422. Epub 2023 Jan 12.
10. Attentional bias towards social interactions during viewing of naturalistic scenes.
Q J Exp Psychol (Hove). 2023 Oct;76(10):2303-2311. doi: 10.1177/17470218221140879. Epub 2022 Dec 21.