
A representation of abstract linguistic categories in the visual system underlies successful lipreading.

Affiliations

Department of Biomedical Engineering, Department of Neuroscience, Del Monte Institute for Neuroscience, University of Rochester, Rochester, NY, USA.

Department of Psychology, University of Michigan, Ann Arbor, MI, USA.

Publication Information

Neuroimage. 2023 Nov 15;282:120391. doi: 10.1016/j.neuroimage.2023.120391. Epub 2023 Sep 25.

DOI: 10.1016/j.neuroimage.2023.120391
PMID: 37757989
Abstract

There is considerable debate over how visual speech is processed in the absence of sound and whether neural activity supporting lipreading occurs in visual brain areas. Much of the ambiguity stems from a lack of behavioral grounding and neurophysiological analyses that cannot disentangle high-level linguistic and phonetic/energetic contributions from visual speech. To address this, we recorded EEG from human observers as they watched silent videos, half of which were novel and half of which were previously rehearsed with the accompanying audio. We modeled how the EEG responses to novel and rehearsed silent speech reflected the processing of low-level visual features (motion, lip movements) and a higher-level categorical representation of linguistic units, known as visemes. The ability of these visemes to account for the EEG - beyond the motion and lip movements - was significantly enhanced for rehearsed videos in a way that correlated with participants' trial-by-trial ability to lipread that speech. Source localization of viseme processing showed clear contributions from visual cortex, with no strong evidence for the involvement of auditory areas. We interpret this as support for the idea that the visual system produces its own specialized representation of speech that is (1) well-described by categorical linguistic features, (2) dissociable from lip movements, and (3) predictive of lipreading ability. We also suggest a reinterpretation of previous findings of auditory cortical activation during silent speech that is consistent with hierarchical accounts of visual and audiovisual speech perception.
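The abstract's central analysis compares how well EEG is explained by low-level visual features alone versus those features plus categorical viseme features, with the improvement ("beyond the motion and lip movements") as the key quantity. The following is a minimal sketch of that kind of encoding-model comparison using synthetic data and plain ridge regression; the feature time series, shapes, and regularization value are all illustrative assumptions, not the paper's actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical stand-ins: T samples of one EEG channel driven partly by
# low-level features (motion, lip aperture) and partly by a viseme feature.
T = 2000
motion = rng.standard_normal(T)    # low-level motion feature
lips = rng.standard_normal(T)      # lip-movement feature
visemes = rng.standard_normal(T)   # categorical viseme time series (stand-in)
eeg = 0.5 * motion + 0.3 * lips + 0.4 * visemes + rng.standard_normal(T)

def ridge_r(X, y, lam=1.0, split=0.5):
    """Fit ridge regression on the first half, return Pearson r on the second."""
    n = int(len(y) * split)
    Xtr, Xte, ytr, yte = X[:n], X[n:], y[:n], y[n:]
    w = np.linalg.solve(Xtr.T @ Xtr + lam * np.eye(X.shape[1]), Xtr.T @ ytr)
    return np.corrcoef(Xte @ w, yte)[0, 1]

X_low = np.column_stack([motion, lips])           # low-level model
X_full = np.column_stack([motion, lips, visemes]) # + viseme features

r_low = ridge_r(X_low, eeg)
r_full = ridge_r(X_full, eeg)
# The quantity of interest is the prediction gain from adding visemes.
print(f"low-level r = {r_low:.3f}, + visemes r = {r_full:.3f}, "
      f"gain = {r_full - r_low:.3f}")
```

In the study this gain was computed per trial and condition (novel vs. rehearsed) and correlated with lipreading performance; real analyses of this kind also use time-lagged feature matrices (temporal response functions) rather than instantaneous regressors.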


Similar Articles

1. A representation of abstract linguistic categories in the visual system underlies successful lipreading.
   Neuroimage. 2023 Nov 15;282:120391. doi: 10.1016/j.neuroimage.2023.120391. Epub 2023 Sep 25.
2. Lip-Reading Enables the Brain to Synthesize Auditory Features of Unknown Silent Speech.
   J Neurosci. 2020 Jan 29;40(5):1053-1065. doi: 10.1523/JNEUROSCI.1101-19.2019. Epub 2019 Dec 30.
3. Auditory cortex encodes lipreading information through spatially distributed activity.
   Curr Biol. 2024 Sep 9;34(17):4021-4032.e5. doi: 10.1016/j.cub.2024.07.073. Epub 2024 Aug 16.
4. Differential Auditory and Visual Phase-Locking Are Observed during Audio-Visual Benefit and Silent Lip-Reading for Speech Perception.
   J Neurosci. 2022 Aug 3;42(31):6108-6120. doi: 10.1523/JNEUROSCI.2476-21.2022. Epub 2022 Jun 27.
5. Lipreading and covert speech production similarly modulate human auditory-cortex responses to pure tones.
   J Neurosci. 2010 Jan 27;30(4):1314-21. doi: 10.1523/JNEUROSCI.1950-09.2010.
6. Increased Connectivity among Sensory and Motor Regions during Visual and Audiovisual Speech Perception.
   J Neurosci. 2022 Jan 19;42(3):435-442. doi: 10.1523/JNEUROSCI.0114-21.2021. Epub 2021 Nov 23.
7. Electrophysiological evidence for speech-specific audiovisual integration.
   Neuropsychologia. 2014 Jan;53:115-21. doi: 10.1016/j.neuropsychologia.2013.11.011. Epub 2013 Nov 27.
8. Activation of auditory cortex during silent lipreading.
   Science. 1997 Apr 25;276(5312):593-6. doi: 10.1126/science.276.5312.593.
9. A Visual Cortical Network for Deriving Phonological Information from Intelligible Lip Movements.
   Curr Biol. 2018 May 7;28(9):1453-1459.e3. doi: 10.1016/j.cub.2018.03.044. Epub 2018 Apr 19.
10. Electrophysiological evidence for Audio-visuo-lingual speech integration.
    Neuropsychologia. 2018 Jan 31;109:126-133. doi: 10.1016/j.neuropsychologia.2017.12.024. Epub 2017 Dec 14.

Cited By

1. Lip-Reading: Advances and Unresolved Questions in a Key Communication Skill.
   Audiol Res. 2025 Jul 21;15(4):89. doi: 10.3390/audiolres15040089.
2. Phonological representations of auditory and visual speech in the occipito-temporal cortex and beyond.
   J Neurosci. 2025 Apr 30. doi: 10.1523/JNEUROSCI.1415-24.2025.
3. Dynamic modeling of EEG responses to natural speech reveals earlier processing of predictable words.
   PLoS Comput Biol. 2025 Apr 28;21(4):e1013006. doi: 10.1371/journal.pcbi.1013006. eCollection 2025 Apr.
4. A comparison of EEG encoding models using audiovisual stimuli and their unimodal counterparts.
   PLoS Comput Biol. 2024 Sep 9;20(9):e1012433. doi: 10.1371/journal.pcbi.1012433. eCollection 2024 Sep.
5. Auditory cortex encodes lipreading information through spatially distributed activity.
   Curr Biol. 2024 Sep 9;34(17):4021-4032.e5. doi: 10.1016/j.cub.2024.07.073. Epub 2024 Aug 16.
6. Modality-Specific Perceptual Learning of Vocoded Auditory versus Lipread Speech: Different Effects of Prior Information.
   Brain Sci. 2023 Jun 29;13(7):1008. doi: 10.3390/brainsci13071008.
7. Increases in sensory noise predict attentional disruptions to audiovisual speech perception.
   Front Hum Neurosci. 2023 Jan 4;16:1027335. doi: 10.3389/fnhum.2022.1027335. eCollection 2022.
8. Differential Auditory and Visual Phase-Locking Are Observed during Audio-Visual Benefit and Silent Lip-Reading for Speech Perception.
   J Neurosci. 2022 Aug 3;42(31):6108-6120. doi: 10.1523/JNEUROSCI.2476-21.2022. Epub 2022 Jun 27.
9. MEG Activity in Visual and Auditory Cortices Represents Acoustic Speech-Related Information during Silent Lip Reading.
   eNeuro. 2022 Jun 27;9(3). doi: 10.1523/ENEURO.0209-22.2022. Print 2022 May-Jun.
10. Lipreading: A Review of Its Continuing Importance for Speech Recognition With an Acquired Hearing Loss and Possibilities for Effective Training.
    Am J Audiol. 2022 Jun 2;31(2):453-469. doi: 10.1044/2021_AJA-21-00112. Epub 2022 Mar 22.