Department of Neurology, The University of Texas at Austin.
Helen DeVos Children's Hospital, Corewell Health, Grand Rapids, MI.
J Speech Lang Hear Res. 2024 Nov 7;67(11):4216-4225. doi: 10.1044/2024_JSLHR-24-00046. Epub 2024 Aug 6.
The aim of this study was to decode intended and overt speech from neuromagnetic signals recorded while participants performed spontaneous overt speech tasks without cues or prompts (stimuli).
Magnetoencephalography (MEG), a noninvasive neuroimaging technique, was used to record neural signals from seven healthy adult English speakers performing spontaneous, overt speech tasks. Participants randomly spoke the words "yes" or "no" at a self-paced rate, without cues. Two machine learning models, linear discriminant analysis (LDA) and a one-dimensional convolutional neural network (1D CNN), were used to classify the two words from the recorded MEG signals.
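As a rough illustration of the decoding pipeline described above, the sketch below pairs a shrinkage LDA on time-averaged MEG features with a small 1D CNN that convolves over time. All data shapes, channel counts, feature-reduction choices, and hyperparameters are assumptions for illustration only and are not taken from the paper.

```python
# Minimal sketch of a two-class ("yes"/"no") MEG decoding pipeline.
# Shapes, channel counts, and hyperparameters are illustrative assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
import torch
import torch.nn as nn

# Assumed epoch layout: n_trials x n_channels x n_timepoints, binary labels.
n_trials, n_channels, n_times = 200, 204, 300
rng = np.random.default_rng(0)
X = rng.standard_normal((n_trials, n_channels, n_times)).astype(np.float32)
y = rng.integers(0, 2, n_trials)  # 0 = "no", 1 = "yes"

# --- LDA on temporally binned features (averaging over 10 time bins is an
# assumed feature-reduction step, not necessarily the authors' configuration).
X_lda = X.reshape(n_trials, n_channels, 10, n_times // 10).mean(axis=-1)
X_lda = X_lda.reshape(n_trials, -1)
lda = LinearDiscriminantAnalysis(solver="lsqr", shrinkage="auto")
acc = cross_val_score(lda, X_lda, y, cv=5).mean()
print(f"LDA cross-validated accuracy: {acc:.2f}")

# --- A small 1D CNN that treats MEG sensors as input channels and
# convolves along the time axis.
class SpeechCNN(nn.Module):
    def __init__(self, n_channels: int, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(4),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):  # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

model = SpeechCNN(n_channels)
logits = model(torch.from_numpy(X[:8]))  # forward pass on a small batch
print(logits.shape)                      # -> torch.Size([8, 2])
```

The CNN above is shown only as an untrained forward pass to verify the shape of the per-trial class scores; in practice it would be trained with a cross-entropy loss and evaluated with the same cross-validation scheme as the LDA.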
LDA and the 1D CNN achieved average decoding accuracies of 79.02% and 90.40%, respectively, for overt speech, both significantly above the 50% chance level. The accuracy for decoding intended speech was 67.19% using the 1D CNN.
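The abstract does not state how significance against chance was assessed; one common approach, sketched below, is a one-sided binomial test on the number of correctly classified test trials. The trial count here is an assumed value for illustration.

```python
# Hedged sketch: testing whether a decoding accuracy exceeds the 50% chance
# level with a one-sided binomial test. The number of test trials is assumed.
from scipy.stats import binomtest

n_test_trials = 100                  # assumed test-set size
observed_accuracy = 0.9040           # reported 1D CNN accuracy for overt speech
n_correct = round(observed_accuracy * n_test_trials)

result = binomtest(n_correct, n_test_trials, p=0.5, alternative="greater")
print(f"{n_correct}/{n_test_trials} correct, p = {result.pvalue:.2e}")
```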
This study demonstrates the feasibility of decoding spontaneous overt and intended speech directly from neural signals in the absence of perceptual interference. We believe these findings represent a solid step toward future spontaneous speech-based brain-computer interfaces.