Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, USA.
Weill Institute for Neuroscience, University of California, San Francisco, San Francisco, CA, USA.
Nat Biomed Eng. 2024 Aug;8(8):977-991. doi: 10.1038/s41551-024-01207-5. Epub 2024 May 20.
Advances in decoding speech from brain activity have focused on decoding a single language. Hence, the extent to which bilingual speech production relies on unique or shared cortical activity across languages has remained unclear. Here, we leveraged electrocorticography, along with deep-learning and statistical natural-language models of English and Spanish, to record and decode activity from the speech-motor cortex of a Spanish-English bilingual with vocal-tract and limb paralysis into sentences in either language, without requiring the participant to manually specify the target language. Decoding models relied on shared vocal-tract articulatory representations across languages, which allowed us to build a syllable classifier that generalized across a shared set of English and Spanish syllables. Transfer learning expedited training of the bilingual decoder by enabling neural data recorded in one language to improve decoding in the other language. Overall, our findings suggest shared cortical articulatory representations that persist after paralysis and enable the decoding of multiple languages without the need to train separate language-specific decoders.
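The transfer-learning idea described above can be illustrated with a minimal toy sketch (not the authors' decoder or data): if both languages evoke similar cortical patterns for shared syllables, a classifier initialized from plentiful trials in one language and updated with scarce trials in the other can outperform training on the scarce data alone. All names, the nearest-centroid classifier, and the simulated "neural" features below are hypothetical assumptions for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)
n_features, n_syllables = 32, 4
# Shared articulatory templates: both languages are assumed to evoke
# similar cortical patterns for the same syllable (a core finding above).
shared = rng.normal(size=(n_syllables, n_features))

def simulate(n_per_class, noise=0.8):
    """Simulate balanced labeled trials around the shared templates."""
    y = np.repeat(np.arange(n_syllables), n_per_class)
    X = shared[y] + noise * rng.normal(size=(len(y), n_features))
    return X, y

def centroids(X, y):
    """Per-syllable mean feature vector (a toy stand-in for a decoder)."""
    return np.stack([X[y == k].mean(axis=0) for k in range(n_syllables)])

def accuracy(C, X, y):
    """Classify each trial by its nearest centroid."""
    pred = np.argmin(((X[:, None, :] - C[None]) ** 2).sum(-1), axis=1)
    return (pred == y).mean()

X_en, y_en = simulate(100)  # plentiful trials in language A ("English")
X_es, y_es = simulate(5)    # scarce trials in language B ("Spanish")
X_te, y_te = simulate(50)   # held-out language-B test trials

# Transfer: initialize from language-A centroids, blend in scarce B data.
C_transfer = 0.5 * centroids(X_en, y_en) + 0.5 * centroids(X_es, y_es)
C_scratch = centroids(X_es, y_es)  # baseline: scarce B data only

print("transfer:", accuracy(C_transfer, X_te, y_te))
print("scratch: ", accuracy(C_scratch, X_te, y_te))
```

Because the simulated templates are shared across "languages" by construction, the language-A centroids are already good estimates for language B, which is the intuition behind cross-language transfer sketched here.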