Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA.
T&C Chen Brain-Machine Interface Center, California Institute of Technology, Pasadena, CA, USA.
Nat Hum Behav. 2024 Jun;8(6):1136-1149. doi: 10.1038/s41562-024-01867-y. Epub 2024 May 13.
Speech brain-machine interfaces (BMIs) translate brain signals into words or audio outputs, enabling communication for people who have lost the ability to speak due to disease or injury. While important advances in vocalized, attempted and mimed speech decoding have been achieved, results for internal speech decoding are sparse and have yet to achieve high functionality. Notably, it is still unclear from which brain areas internal speech can be decoded. Here, two participants with tetraplegia, with microelectrode arrays implanted in the supramarginal gyrus (SMG) and primary somatosensory cortex (S1), performed internal and vocalized speech of six words and two pseudowords. In both participants, we found significant neural representation of internal and vocalized speech in the SMG, at both the single-neuron and population levels. From recorded population activity in the SMG, the internally spoken and vocalized words were significantly decodable. In an offline analysis, we achieved average decoding accuracies of 55% and 24% for the two participants, respectively (chance level 12.5%), and during an online internal speech BMI task, we averaged 79% and 23% accuracy, respectively. Evidence of shared neural representations between internal speech, word reading and vocalized speech processes was found in participant 1. The SMG represented words as well as pseudowords, providing evidence for phonetic encoding. Furthermore, our decoder achieved high classification accuracy with multiple internal speech strategies (auditory imagination/visual imagination). Activity in S1 was modulated by vocalized but not internal speech in both participants, suggesting that no articulator movements of the vocal tract occurred during internal speech production. This work represents a proof of concept for a high-performance internal speech BMI.