From the Departments of Neurological Surgery (N.S.C., M.W., C.I., X.H., T.S.-C., M.V.N., A.S., K.S., S.D.S., D.M.B.), Computer Science (X.H., M.V.N.), and Biomedical Engineering (T.S.-C., A.S.), University of California, Davis, Davis, and the Departments of Neurosurgery (D.R.D., E.Y.C., J.M.H.), Electrical Engineering (E.M.K.), and Computer Science (C.F.), the Wu Tsai Neurosciences Institute (E.M.K., J.M.H.), the Howard Hughes Medical Institute (F.R.W.), and Bio-X (J.M.H.), Stanford University, Stanford - both in California; the Departments of Radiology and Neuroscience, Washington University School of Medicine, Saint Louis (M.F.G.); the School of Engineering and Carney Institute for Brain Sciences, Brown University (L.R.H.), and the Center for Neurorestoration and Neurotechnology, Department of Veterans Affairs Office of Rehabilitation Research and Development, VA Providence Healthcare (L.R.H.) - both in Providence, RI; and the Center for Neurotechnology and Neurorecovery, Department of Neurology, Massachusetts General Hospital, Harvard Medical School, Boston (L.R.H.).
N Engl J Med. 2024 Aug 15;391(7):609-618. doi: 10.1056/NEJMoa2314132.
Brain-computer interfaces can enable communication for people with paralysis by transforming cortical activity associated with attempted speech into text on a computer screen. To date, however, communication with brain-computer interfaces has been restricted by extensive training requirements and limited accuracy.
A 45-year-old man with amyotrophic lateral sclerosis (ALS), tetraparesis, and severe dysarthria underwent surgical implantation of four microelectrode arrays into his left ventral precentral gyrus 5 years after the onset of the illness; these arrays recorded neural activity from 256 intracortical electrodes. We report the results of decoding his cortical neural activity as he attempted to speak in both prompted and unstructured conversational contexts. Decoded words were displayed on a screen and then vocalized with the use of text-to-speech software designed to sound like his pre-ALS voice.
On the first day of use (25 days after surgery), the neuroprosthesis achieved 99.6% accuracy with a 50-word vocabulary. Calibration of the neuroprosthesis required 30 minutes of cortical recordings while the participant attempted to speak, followed by additional processing. On the second day, after 1.4 additional hours of system training, the neuroprosthesis achieved 90.2% accuracy with a 125,000-word vocabulary. With further training data, the neuroprosthesis sustained 97.5% accuracy over a period of 8.4 months after surgical implantation, and the participant used it to communicate in self-paced conversations at a rate of approximately 32 words per minute for more than 248 cumulative hours.
In a person with ALS and severe dysarthria, an intracortical speech neuroprosthesis reached a level of performance suitable to restore conversational communication after brief training. (Funded by the Office of the Assistant Secretary of Defense for Health Affairs and others; BrainGate2 ClinicalTrials.gov number, NCT00912041.)