
Similar Articles

1. Some behavioral and neurobiological constraints on theories of audiovisual speech integration: a review and suggestions for new directions.
Seeing Perceiving. 2011;24(6):513-39. doi: 10.1163/187847611X595864. Epub 2011 Sep 29.
2. Neural Mechanisms Underlying Cross-Modal Phonetic Encoding.
J Neurosci. 2018 Feb 14;38(7):1835-1849. doi: 10.1523/JNEUROSCI.1566-17.2017. Epub 2017 Dec 20.
3. How are visemes and graphemes integrated with speech sounds during spoken word recognition? ERP evidence for supra-additive responses during audiovisual compared to auditory speech processing.
Brain Lang. 2022 Feb;225:105058. doi: 10.1016/j.bandl.2021.105058. Epub 2021 Dec 17.
4. Neurophysiological Indices of Audiovisual Speech Processing Reveal a Hierarchy of Multisensory Integration Effects.
J Neurosci. 2021 Jun 9;41(23):4991-5003. doi: 10.1523/JNEUROSCI.0906-20.2021. Epub 2021 Apr 6.
5. Electrophysiological evidence for a self-processing advantage during audiovisual speech integration.
Exp Brain Res. 2017 Sep;235(9):2867-2876. doi: 10.1007/s00221-017-5018-0. Epub 2017 Jul 4.
6. Timing in audiovisual speech perception: A mini review and new psychophysical data.
Atten Percept Psychophys. 2016 Feb;78(2):583-601. doi: 10.3758/s13414-015-1026-y.
7. Audiovisual integration of speech in a bistable illusion.
Curr Biol. 2009 May 12;19(9):735-9. doi: 10.1016/j.cub.2009.03.019. Epub 2009 Apr 2.
8. Audiovisual matching in speech and nonspeech sounds: a neurodynamical model.
J Cogn Neurosci. 2010 Feb;22(2):240-7. doi: 10.1162/jocn.2009.21202.
9. The timing of visual speech modulates auditory neural processing.
Brain Lang. 2022 Dec;235:105196. doi: 10.1016/j.bandl.2022.105196. Epub 2022 Oct 28.
10. Prediction and constraint in audiovisual speech perception.
Cortex. 2015 Jul;68:169-81. doi: 10.1016/j.cortex.2015.03.006. Epub 2015 Mar 20.

Cited By

1. Increased Connectivity among Sensory and Motor Regions during Visual and Audiovisual Speech Perception.
J Neurosci. 2022 Jan 19;42(3):435-442. doi: 10.1523/JNEUROSCI.0114-21.2021. Epub 2021 Nov 23.
2. Audiovisual sentence recognition not predicted by susceptibility to the McGurk effect.
Atten Percept Psychophys. 2017 Feb;79(2):396-403. doi: 10.3758/s13414-016-1238-9.
3. Sensory-Cognitive Interactions in Older Adults.
Ear Hear. 2016 Jul-Aug;37 Suppl 1(Suppl 1):52S-61S. doi: 10.1097/AUD.0000000000000303.
4. Parallel linear dynamic models can mimic the McGurk effect in clinical populations.
J Comput Neurosci. 2016 Oct;41(2):143-55. doi: 10.1007/s10827-016-0610-z. Epub 2016 Jun 7.
5. Multisensory perception as an associative learning process.
Front Psychol. 2014 Sep 26;5:1095. doi: 10.3389/fpsyg.2014.01095. eCollection 2014.
6. Enhanced audiovisual integration with aging in speech perception: a heightened McGurk effect in older adults.
Front Psychol. 2014 Apr 14;5:323. doi: 10.3389/fpsyg.2014.00323. eCollection 2014.
7. Multisensory integration, learning, and the predictive coding hypothesis.
Front Psychol. 2014 Mar 24;5:257. doi: 10.3389/fpsyg.2014.00257. eCollection 2014.
8. Neural dynamics of audiovisual speech integration under variable listening conditions: an individual participant analysis.
Front Psychol. 2013 Sep 10;4:615. doi: 10.3389/fpsyg.2013.00615. eCollection 2013.
9. Speech through ears and eyes: interfacing the senses with the supramodal brain.
Front Psychol. 2013 Jul 12;4:388. doi: 10.3389/fpsyg.2013.00388. eCollection 2013.

References

1. A Longitudinal Study of Audiovisual Speech Perception by Children with Hearing Loss Who have Cochlear Implants.
Volta Rev. 2003;103(4):347-370.
2. Crossmodal Source Identification in Speech Perception.
Ecol Psychol. 2004;16(3):159-187. doi: 10.1207/s15326969eco1603_1.
3. Nice Guys Finish Fast and Bad Guys Finish Last: Facilitatory vs. Inhibitory Interaction in Parallel Systems.
J Math Psychol. 2011 Apr 1;55(2):176-190. doi: 10.1016/j.jmp.2010.11.003.
4. The optimal time window of visual-auditory integration: a reaction time analysis.
Front Integr Neurosci. 2010 May 11;4:11. doi: 10.3389/fnint.2010.00011. eCollection 2010.
5. Visual enhancement of the information representation in auditory cortex.
Curr Biol. 2010 Jan 12;20(1):19-24. doi: 10.1016/j.cub.2009.10.068. Epub 2009 Dec 31.
6. Spatial organization of multisensory responses in temporal association cortex.
J Neurosci. 2009 Sep 23;29(38):11924-32. doi: 10.1523/JNEUROSCI.3437-09.2009.
7. The natural statistics of audiovisual speech.
PLoS Comput Biol. 2009 Jul;5(7):e1000436. doi: 10.1371/journal.pcbi.1000436. Epub 2009 Jul 17.
8. Crossmodal interaction in speeded responses: time window of integration model.
Prog Brain Res. 2009;174:119-35. doi: 10.1016/S0079-6123(09)01311-9.
9. Mismatch negativity with visual-only and audiovisual speech.
Brain Topogr. 2009 May;21(3-4):207-15. doi: 10.1007/s10548-009-0094-5. Epub 2009 Apr 30.
10. Not just for bimodal neurons anymore: the contribution of unimodal neurons to cortical multisensory processing.
Brain Topogr. 2009 May;21(3-4):157-67. doi: 10.1007/s10548-009-0088-3. Epub 2009 Mar 27.

Some behavioral and neurobiological constraints on theories of audiovisual speech integration: a review and suggestions for new directions.

Author Information

Altieri Nicholas, Pisoni David B, Townsend James T

Affiliations

Department of Psychology, University of Oklahoma, OK 73072, USA.

Publication Information

Seeing Perceiving. 2011;24(6):513-39. doi: 10.1163/187847611X595864. Epub 2011 Sep 29.

DOI: 10.1163/187847611X595864
PMID: 21968081
Full text link: https://pmc.ncbi.nlm.nih.gov/articles/PMC3293210/
Abstract

Summerfield (1987) proposed several accounts of audiovisual speech perception, a field of research that has burgeoned in recent years. The proposed accounts included the integration of discrete phonetic features, vectors describing the values of independent acoustical and optical parameters, the filter function of the vocal tract, and articulatory dynamics of the vocal tract. The latter two accounts assume that the representations of audiovisual speech perception are based on abstract gestures, while the former two assume that the representations consist of symbolic or featural information obtained from visual and auditory modalities. Recent converging evidence from several different disciplines reveals that the general framework of Summerfield's feature-based theories should be expanded. An updated framework building upon the feature-based theories is presented. We propose a processing model arguing that auditory and visual brain circuits provide facilitatory information when the inputs are correctly timed, and that auditory and visual speech representations do not necessarily undergo translation into a common code during information processing. Future research on multisensory processing in speech perception should investigate the connections between auditory and visual brain regions, and utilize dynamic modeling tools to further understand the timing and information processing mechanisms involved in audiovisual speech integration.
