

Audiovisual matching in speech and nonspeech sounds: a neurodynamical model.

Affiliation

Universitat Pompeu Fabra, Barcelona, Spain.

Publication

J Cogn Neurosci. 2010 Feb;22(2):240-7. doi: 10.1162/jocn.2009.21202.

DOI: 10.1162/jocn.2009.21202
PMID: 19302007
Abstract

Audiovisual speech perception provides an opportunity to investigate the mechanisms underlying multimodal processing. By using nonspeech stimuli, it is possible to investigate the degree to which audiovisual processing is specific to the speech domain. It has been shown in a match-to-sample design that matching across modalities is more difficult in the nonspeech domain as compared to the speech domain. We constructed a biophysically realistic neural network model simulating this experimental evidence. We propose that a stronger connection between modalities in speech underlies the behavioral difference between the speech and the nonspeech domain. This could be the result of more extensive experience with speech stimuli. Because the match-to-sample paradigm does not allow us to draw conclusions concerning the integration of auditory and visual information, we also simulated two further conditions based on the same paradigm, which tested the integration of auditory and visual information within a single stimulus. New experimental data for these two conditions support the simulation results and suggest that audiovisual integration of discordant stimuli is stronger in speech than in nonspeech stimuli. According to the simulations, the connection strength between auditory and visual information, on the one hand, determines how well auditory information can be assigned to visual information, and on the other hand, it influences the magnitude of multimodal integration.
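The abstract's closing claim — that a single cross-modal connection strength both determines how well auditory information is assigned to visual information and scales the magnitude of multimodal integration — can be loosely illustrated with a toy model. The sketch below is not the paper's biophysically realistic network; it is a minimal two-unit rate model in which an assumed coupling parameter `w_av` stands in for the auditory-visual connection strength, and the joint activity of the two units stands in for the integrated response.

```python
import math

# Hypothetical minimal sketch, NOT the authors' model: two mutually coupled
# rate units ("auditory" and "visual") whose cross-modal coupling w_av is
# varied. Stronger coupling (the abstract's "speech" regime) should yield a
# larger integrated (joint) response than weaker coupling ("nonspeech").

def integrated_response(w_av, steps=400, dt=0.05, tau=1.0, drive=1.0):
    """Simulate two coupled rate units; return the mean joint activity a*v."""
    a = v = 0.0          # auditory and visual firing rates
    total = 0.0
    for _ in range(steps):
        # Leaky rate dynamics: each unit is driven by its own input plus
        # the other modality, weighted by the cross-modal coupling w_av.
        a += dt / tau * (-a + math.tanh(drive + w_av * v))
        v += dt / tau * (-v + math.tanh(drive + w_av * a))
        total += a * v   # joint activity as a crude "integration" readout
    return total / steps

weak = integrated_response(w_av=0.2)    # weaker cross-modal link ("nonspeech")
strong = integrated_response(w_av=0.8)  # stronger cross-modal link ("speech")
assert strong > weak
```

With positive drive, both units settle near a fixed point whose height grows with `w_av`, so the joint readout is larger under strong coupling — a qualitative stand-in for the simulated speech-versus-nonspeech difference, nothing more.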


Similar Articles

1. Audiovisual matching in speech and nonspeech sounds: a neurodynamical model.
   J Cogn Neurosci. 2010 Feb;22(2):240-7. doi: 10.1162/jocn.2009.21202.
2. Time course of early audiovisual interactions during speech and nonspeech central auditory processing: a magnetoencephalography study.
   J Cogn Neurosci. 2009 Feb;21(2):259-74. doi: 10.1162/jocn.2008.21019.
3. Cross-modal interactions during perception of audiovisual speech and nonspeech signals: an fMRI study.
   J Cogn Neurosci. 2011 Jan;23(1):221-37. doi: 10.1162/jocn.2010.21421.
4. Exposure to asynchronous audiovisual speech extends the temporal window for audiovisual integration.
   Brain Res Cogn Brain Res. 2005 Oct;25(2):499-507. doi: 10.1016/j.cogbrainres.2005.07.009. Epub 2005 Aug 31.
5. Neurophysiological indices of speech and nonspeech stimulus processing.
   J Speech Lang Hear Res. 2005 Oct;48(5):1147-64. doi: 10.1044/1092-4388(2005/081).
6. Neural correlates of multisensory integration of ecologically valid audiovisual events.
   J Cogn Neurosci. 2007 Dec;19(12):1964-73. doi: 10.1162/jocn.2007.19.12.1964.
7. Attention rivalry under irrelevant audiovisual stimulation.
   Neurosci Lett. 2008 Jun 13;438(1):6-9. doi: 10.1016/j.neulet.2008.04.049. Epub 2008 Apr 18.
8. A novel approach to study audiovisual integration in speech perception: localizer fMRI and sparse sampling.
   Brain Res. 2008 Jul 18;1220:142-9. doi: 10.1016/j.brainres.2007.08.027. Epub 2007 Aug 19.
9. A biologically motivated neural network for phase extraction from complex sounds.
   Biol Cybern. 2004 Feb;90(2):98-104. doi: 10.1007/s00422-003-0459-x. Epub 2004 Feb 13.
10. A role for the inferior colliculus in multisensory speech integration.
    Neuroreport. 2006 Oct 23;17(15):1607-10. doi: 10.1097/01.wnr.0000236856.93586.94.