Resting-state functional connectivity changes following audio-tactile speech training.

Authors

Cieśla Katarzyna, Wolak Tomasz, Amedi Amir

Affiliations

The Baruch Ivcher Institute for Brain, Cognition, and Technology, The Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel.

The Ruth and Meir Rosenthal Brain Imaging Center, Reichman University, Herzliya, Israel.

Publication Information

Front Neurosci. 2025 Apr 29;19:1482828. doi: 10.3389/fnins.2025.1482828. eCollection 2025.

Abstract

Understanding speech in background noise is a challenging task, especially when the signal is also distorted. In a series of previous studies, we have shown that comprehension can improve if, simultaneously with auditory speech, the person receives speech-extracted low-frequency signals on their fingertips. The effect increases after short audio-tactile speech training. In this study, we used resting-state functional magnetic resonance imaging (rsfMRI) to measure spontaneous low-frequency oscillations in the brain while at rest to assess training-induced changes in functional connectivity. We observed enhanced functional connectivity (FC) within a right-hemisphere cluster corresponding to the middle temporal motion area (MT), the extrastriate body area (EBA), and the lateral occipital cortex (LOC), which, before the training, was found to be more connected to the bilateral dorsal anterior insula. Furthermore, early visual areas demonstrated a switch from increased connectivity with the auditory cortex before training to increased connectivity with a sensory/multisensory association parietal hub, contralateral to the palm receiving vibrotactile inputs, after training. In addition, the right sensorimotor cortex, including finger representations, was more connected internally after the training. The results altogether can be interpreted within two main complementary frameworks. The first, speech-specific, factor relates to the pre-existing brain connectivity for audio-visual speech processing, including early visual, motion, and body regions involved in lip-reading and gesture analysis under difficult acoustic conditions, upon which the new audio-tactile speech network might be built. The other framework refers to spatial/body awareness and audio-tactile integration, both of which are necessary for performing the task, including in the revealed parietal and insular regions. It is possible that an extended training period is necessary to directly strengthen functional connections between the auditory and the sensorimotor brain regions for the utterly novel multisensory task. The results contribute to a better understanding of the largely unknown neuronal mechanisms underlying tactile speech benefits for speech comprehension and may be relevant for rehabilitation in the hearing-impaired population.
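For readers unfamiliar with the central measure, resting-state functional connectivity between two regions is conventionally quantified as the temporal (Pearson) correlation of their BOLD time courses, usually Fisher z-transformed before pre- versus post-training comparisons. The Python sketch below illustrates that general idea on synthetic data only; it is not the authors' analysis pipeline, and the seed choice, region labels, and time-series lengths are hypothetical.

# Illustrative sketch only (not the authors' pipeline): seed-based resting-state
# functional connectivity (FC) computed as the Pearson correlation between
# regional BOLD time courses, Fisher z-transformed for pre/post comparison.
# All region labels and array shapes below are hypothetical.
import numpy as np

def functional_connectivity(seed_ts, target_ts):
    # Pearson correlation between two regional BOLD time courses.
    return float(np.corrcoef(seed_ts, target_ts)[0, 1])

def fisher_z(r):
    # Fisher r-to-z transform so FC values can be compared across sessions.
    return float(np.arctanh(r))

# Synthetic data: 200 volumes (TRs) per session for a hypothetical early-visual
# seed and two hypothetical targets (auditory, parietal), pre and post training.
rng = np.random.default_rng(0)
n_tr = 200
shared_pre, shared_post = rng.standard_normal(n_tr), rng.standard_normal(n_tr)

seed_pre = shared_pre + 0.8 * rng.standard_normal(n_tr)
auditory_pre = shared_pre + 0.8 * rng.standard_normal(n_tr)   # coupled to seed pre-training
parietal_pre = rng.standard_normal(n_tr)                      # uncoupled pre-training

seed_post = shared_post + 0.8 * rng.standard_normal(n_tr)
auditory_post = rng.standard_normal(n_tr)                     # decoupled post-training
parietal_post = shared_post + 0.8 * rng.standard_normal(n_tr) # coupled post-training

for label, pre_pair, post_pair in [
    ("seed-auditory", (seed_pre, auditory_pre), (seed_post, auditory_post)),
    ("seed-parietal", (seed_pre, parietal_pre), (seed_post, parietal_post)),
]:
    z_pre = fisher_z(functional_connectivity(*pre_pair))
    z_post = fisher_z(functional_connectivity(*post_pair))
    print(f"{label}: z_pre={z_pre:+.2f}  z_post={z_post:+.2f}  change={z_post - z_pre:+.2f}")

In whole-brain analyses such as the one reported here, this pairwise computation is typically run from each seed to every voxel or parcel, and the resulting z-maps are contrasted between sessions at the group level.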

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/5431/12069311/659c80db608e/fnins-19-1482828-g0001.jpg
