
The neural correlates of cross-modal interaction in speech perception during a semantic decision task on sentences: a PET study.

Authors

Kang Eunjoo, Lee Dong Soo, Kang Hyejin, Hwang Chan Ho, Oh Seung-Ha, Kim Chong-Sun, Chung June-Key, Lee Myung Chul

Affiliation

Department of Nuclear Medicine, Seoul National University, 28 Yeongeon-dong, Jongno-gu, Seoul 110-744, Republic of Korea.

Publication

Neuroimage. 2006 Aug 1;32(1):423-31. doi: 10.1016/j.neuroimage.2006.03.016. Epub 2006 Apr 27.

DOI: 10.1016/j.neuroimage.2006.03.016
PMID: 16644239
Abstract

Speech perception in face-to-face conversation involves processing of speech sounds (auditory) and speech-associated mouth/lip movements (visual) from a speaker. Using PET, in which no scanner noise was present, brain regions involved in speech-cue processing were investigated in normal-hearing subjects with no previous lip-reading training (N = 17) who carried out a semantic-plausibility decision on spoken sentences delivered in a movie file. Multimodality was ensured at the sensory level in all four conditions. A sensory-specific speech cue of one modality, i.e., auditory speech (A condition) or mouth movement (V condition), was delivered with a control stimulus of the other modality, whereas speech cues of both sensory modalities were delivered during the bimodal (AV) condition. In comparison to the control condition, extensive bilateral activations in the superior temporal regions were observed during the A condition, but these activations were reduced in extent and left-lateralized during the AV condition. A polymodal region involved in cross-modal interaction/integration of audiovisual speech, the left posterior superior temporal sulcus (pSTS), was activated during the A condition, and more so during the AV condition, but not during the V condition. Activations were observed in the left Broca's area (BA 44), medial frontal (BA 8), and anterior ventrolateral prefrontal (BA 47) regions during the V condition, in which lip-reading performance was less successful. The results indicated that speech-associated lip movements (the visual speech cue) suppressed activity in the right auditory temporal regions. Overadditivity (AV > A + V) observed in the right postcentral region during the bimodal condition relative to the sum of the unimodal speech conditions was also associated with reduced activity during the V condition.
These findings suggest that the visual speech cue can exert an inhibitory modulatory effect on brain activity in the right hemisphere during the cross-modal interaction of audiovisual speech perception.


Similar Articles

1. The neural correlates of cross-modal interaction in speech perception during a semantic decision task on sentences: a PET study.
Neuroimage. 2006 Aug 1;32(1):423-31. doi: 10.1016/j.neuroimage.2006.03.016. Epub 2006 Apr 27.
2. Reading speech from still and moving faces: the neural substrates of visible speech.
J Cogn Neurosci. 2003 Jan 1;15(1):57-70. doi: 10.1162/089892903321107828.
3. The contribution of visual areas to speech comprehension: a PET study in cochlear implants patients and normal-hearing subjects.
Neuropsychologia. 2002;40(9):1562-9. doi: 10.1016/s0028-3932(02)00023-4.
4. Mouth and Voice: A Relationship between Visual and Auditory Preference in the Human Superior Temporal Sulcus.
J Neurosci. 2017 Mar 8;37(10):2697-2708. doi: 10.1523/JNEUROSCI.2914-16.2017. Epub 2017 Feb 8.
5. Auditory-visual speech perception examined by fMRI and PET.
Neurosci Res. 2003 Nov;47(3):277-87. doi: 10.1016/s0168-0102(03)00214-1.
6. Brain networks engaged in audiovisual integration during speech perception revealed by persistent homology-based network filtration.
Brain Connect. 2015 May;5(4):245-58. doi: 10.1089/brain.2013.0218. Epub 2015 Mar 2.
7. Cross-modal binding and activated attentional networks during audio-visual speech integration: a functional MRI study.
Cereb Cortex. 2005 Nov;15(11):1750-60. doi: 10.1093/cercor/bhi052. Epub 2005 Feb 16.
8. Increased Connectivity among Sensory and Motor Regions during Visual and Audiovisual Speech Perception.
J Neurosci. 2022 Jan 19;42(3):435-442. doi: 10.1523/JNEUROSCI.0114-21.2021. Epub 2021 Nov 23.
9. Inside Speech: Multisensory and Modality-specific Processing of Tongue and Lip Speech Actions.
J Cogn Neurosci. 2017 Mar;29(3):448-466. doi: 10.1162/jocn_a_01057. Epub 2016 Oct 19.
10. Spatial and temporal factors during processing of audiovisual speech: a PET study.
Neuroimage. 2004 Feb;21(2):725-32. doi: 10.1016/j.neuroimage.2003.09.049.

Cited By

1. Integrative interaction of emotional speech in audio-visual modality.
Front Neurosci. 2022 Nov 11;16:797277. doi: 10.3389/fnins.2022.797277. eCollection 2022.
2. Less is more: Removing a modality of an expected olfactory-visual stimulation enhances brain activation.
Hum Brain Mapp. 2022 Jun 1;43(8):2567-2581. doi: 10.1002/hbm.25806. Epub 2022 Feb 10.
3. Neural networks for sentence comprehension and production: An ALE-based meta-analysis of neuroimaging studies.
Hum Brain Mapp. 2019 Jun 1;40(8):2275-2304. doi: 10.1002/hbm.24523. Epub 2019 Jan 28.
4. Brain networks engaged in audiovisual integration during speech perception revealed by persistent homology-based network filtration.
Brain Connect. 2015 May;5(4):245-58. doi: 10.1089/brain.2013.0218. Epub 2015 Mar 2.
5. Cortical integration of audio-visual speech and non-speech stimuli.
Brain Cogn. 2010 Nov;74(2):97-106. doi: 10.1016/j.bandc.2010.07.002. Epub 2010 Aug 14.
6. Neural correlates of semantic competition during processing of ambiguous words.
J Cogn Neurosci. 2009 May;21(5):960-75. doi: 10.1162/jocn.2009.21073.