
Processing of Visual Speech Cues in Speech-in-Noise Comprehension Depends on Working Memory Capacity and Enhances Neural Speech Tracking in Older Adults With Hearing Impairment.

Affiliations

Computational Neuroscience of Speech and Hearing, Department of Computational Linguistics, University of Zurich, Zurich, Switzerland.

International Max Planck Research School for the Life Course: Evolutionary and Ontogenetic Dynamics (LIFE), Berlin, Germany.

Publication Information

Trends Hear. 2024 Jan-Dec;28:23312165241287622. doi: 10.1177/23312165241287622.

Abstract

Comprehending speech in noise (SiN) poses a challenge for older hearing-impaired listeners, requiring auditory and working memory resources. Visual speech cues provide additional sensory information supporting speech understanding, while the extent of such visual benefit is characterized by large variability, which might be accounted for by individual differences in working memory capacity (WMC). In the current study, we investigated behavioral and neurofunctional (i.e., neural speech tracking) correlates of auditory and audio-visual speech comprehension in babble noise and the associations with WMC. Healthy older adults with hearing impairment quantified by pure-tone hearing loss (threshold average: 31.85-57 dB, N = 67) listened to sentences in babble noise in audio-only, visual-only and audio-visual speech modality and performed a pattern matching and a comprehension task, while electroencephalography (EEG) was recorded. Behaviorally, no significant difference in task performance was observed across modalities. However, we did find a significant association between individual working memory capacity and task performance, suggesting a more complex interplay between audio-visual speech cues, working memory capacity and real-world listening tasks. Furthermore, we found that the visual speech presentation was accompanied by increased cortical tracking of the speech envelope, particularly in a right-hemispheric auditory topographical cluster. Post-hoc, we investigated the potential relationships between the behavioral performance and neural speech tracking but were not able to establish a significant association. Overall, our results show an increase in neurofunctional correlates of speech associated with congruent visual speech cues, specifically in a right auditory cluster, suggesting multisensory integration.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/e1f8/11520018/708b0616d6a4/10.1177_23312165241287622-fig1.jpg
