Audiovisual integration of speech: evidence for increased accuracy in "talk" versus "listen" condition.

Author Information

Zografos Lefteris Themelis, Konstantoulaki Anna, Klein Christoph, Vatakis Argiro, Smyrnis Nikolaos

Affiliations

Laboratory of Cognitive Neuroscience and Sensorimotor Control, University Mental Health Neurosciences and Precision Medicine Research Institute "COSTAS STEFANIS", Athens, Greece.

Multisensory and Temporal Processing Laboratory (MultiTimeLab), Department of Psychology, Panteion University of Social and Political Sciences, Athens, Greece.

Publication Information

Exp Brain Res. 2025 May 26;243(6):154. doi: 10.1007/s00221-025-07088-7.

Abstract

Processing of sensory stimuli generated by our own actions differs from that of externally generated stimuli. However, most evidence regarding this phenomenon concerns the processing of unisensory stimuli. A few studies have explored the effect of self-generated actions on multisensory stimuli and how such actions affect the integration of these stimuli. Most of them used abstract stimuli (e.g., flashes, beeps) rather than more natural ones, such as the sensations commonly paired with everyday actions like speech. In the current study, we explored the effect of self-generated action on the process of multisensory integration (MSI) during speech. We used a novel paradigm in which participants either listened to the echo of their own speech while watching a video of themselves producing the same speech ("talk", active condition), or listened to their previously recorded speech while watching the prerecorded video of themselves producing the same speech ("listen", passive condition). In both conditions, different stimulus onset asynchronies (SOAs) were introduced between the auditory and visual streams, and participants were asked to perform simultaneity judgments. From these judgments, we determined a temporal binding window (TBW) of integration for each participant and condition. We found that the TBW was significantly smaller in the active than in the passive condition, indicating more accurate MSI. These results support the conclusion that sensory perception is modulated by self-generated action at the multisensory, in addition to the unisensory, level.
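The abstract does not specify how the TBW was estimated from the simultaneity judgments, but a common approach in this literature is to fit a Gaussian to the proportion of "simultaneous" responses across SOAs and take the window width at a response criterion. The sketch below illustrates that generic procedure only; the SOA values, response proportions, fitting method (a coarse grid search, to stay dependency-free), and the 0.5 criterion are all hypothetical and are not taken from the paper.

```python
import math

# Hypothetical simultaneity-judgment data: proportion of "simultaneous"
# responses at each stimulus onset asynchrony (SOA, in ms;
# negative = auditory stream leads the visual stream).
soas = [-400, -300, -200, -100, 0, 100, 200, 300, 400]
p_simultaneous = [0.05, 0.15, 0.45, 0.80, 0.95, 0.85, 0.55, 0.20, 0.08]

def gaussian(soa, amp, mu, sigma):
    """Gaussian model of P(simultaneous) as a function of SOA."""
    return amp * math.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

def fit_gaussian(soas, probs):
    """Least-squares fit via a coarse grid search (no SciPy required)."""
    best = None
    for mu in range(-100, 101, 10):          # peak location (ms)
        for sigma in range(50, 401, 10):     # curve width (ms)
            for amp10 in range(5, 11):       # peak height 0.5 .. 1.0
                amp = amp10 / 10
                sse = sum((gaussian(s, amp, mu, sigma) - p) ** 2
                          for s, p in zip(soas, probs))
                if best is None or sse < best[0]:
                    best = (sse, amp, mu, sigma)
    return best[1:]

amp, mu, sigma = fit_gaussian(soas, p_simultaneous)

# One common definition: the TBW is the range of SOAs over which the
# fitted curve exceeds a 0.5 criterion. Solving amp*exp(...) = 0.5 gives
# a full width of 2 * sigma * sqrt(2 * ln(amp / 0.5)).
tbw = 2 * sigma * math.sqrt(2 * math.log(amp / 0.5))
print(f"fitted peak at {mu} ms, width sigma = {sigma} ms, TBW = {tbw:.0f} ms")
```

A narrower fitted TBW corresponds to stricter temporal tolerance, i.e., the more precise integration reported for the active "talk" condition. In practice a maximum-likelihood fit (e.g., `scipy.optimize.curve_fit`) would replace the grid search.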
