Sarré Annahita, Cohen Laurent
Inserm U 1127, CNRS UMR 7225, Sorbonne Universités, Institut du Cerveau, ICM, Paris, France.
AP-HP, Hôpital de La Pitié Salpêtrière, Fédération de Neurologie, Paris, France.
Imaging Neurosci (Camb). 2025 Jun 24;3. doi: 10.1162/IMAG.a.53. eCollection 2025.
For many deaf people, lip-reading plays a major role in verbal communication. However, lip movements are inherently ambiguous, so lip-reading alone does not allow full understanding of speech. The resulting difficulties in language access may have serious consequences for language, cognitive, and social development. Cued speech (CS) was developed to eliminate this ambiguity by complementing lip-reading with hand gestures, giving access to the entire phonological content of speech through the visual modality alone. Despite its proven efficacy in improving linguistic and communicative abilities, the mechanisms of CS perception remain largely unknown. The goal of the present study was to delineate the brain regions involved in CS perception and identify their role in visual and language-related processes. Three matched groups of participants (prelingually deaf users of CS, hearing users of CS, and naïve hearing controls) were scanned during the presentation of videos of silent CS sentences, isolated lip movements, isolated gestures, CS sentences with speech sounds, and meaningless CS sentences. We delineated a number of mostly left-hemisphere brain regions involved in CS perception. We first found that language areas were activated in all groups by both silent CS sentences and isolated lip movements, and by gestures in deaf participants only. Despite overlapping activations when perceiving CS, several findings differentiated experts from novices. The Visual Word Form Area, which supports the interface between vision and language during reading, was activated by isolated gestures in deaf CS users. In contrast, in the hearing and control groups, Bayes factors indicated either weak evidence of no activation or negligible evidence of activation.
Moreover, the integration of lip movements and gestures took place in a temporal language-related region in deaf users, and in movement-related regions in hearing users, reflecting their different profiles of expertise in CS comprehension and production. Finally, we observed a strong involvement of the Dorsal Attentional Network in hearing users of CS, and identified the neural correlates of variability in individual proficiency. Cued speech constitutes a novel pathway for accessing core language processes, halfway between speech perception and reading. The current study provides a delineation of the common and specific brain structures supporting these different modalities of language input, paving the way for further research.