Department of Computer Science and Engineering, National Institute of Technology Tiruchirappalli, Tamil Nadu 620015, India.
Behav Brain Res. 2025 Feb 4;477:115295. doi: 10.1016/j.bbr.2024.115295. Epub 2024 Oct 18.
An electroencephalogram (EEG) based brain-computer interface (BCI) system employing imagined speech decodes EEG signals so that users can control external devices or communicate with the outside world whenever they wish. To deploy such BCIs effectively, it is essential to accurately discern the various brain states present in continuous EEG signals when users begin imagining words.
This study involved the acquisition of EEG signals from 15 subjects engaged in four states: resting, listening, imagined speech, and actual speech, each involving a predefined set of 10 words. The EEG signals underwent preprocessing, segmentation, spatio-temporal and spectral analysis of each state, and functional connectivity analysis using the phase locking value (PLV) method. Subsequently, five features were extracted from the frequency and time-frequency domains. Classification tasks were performed using four machine learning algorithms in both pair-wise and multiclass scenarios, considering subject-dependent and subject-independent data.
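The functional connectivity analysis mentioned above relies on the phase locking value (PLV), which measures the consistency of the instantaneous phase difference between two EEG channels. A minimal sketch of a standard PLV computation, using the Hilbert transform to obtain instantaneous phases (the function name and signals are illustrative, not taken from the study):

```python
import numpy as np
from scipy.signal import hilbert

def phase_locking_value(x, y):
    """Standard PLV between two equal-length 1-D signals, in [0, 1]."""
    # Instantaneous phase of each channel via the analytic signal
    phase_x = np.angle(hilbert(x))
    phase_y = np.angle(hilbert(y))
    # Magnitude of the mean unit phase-difference vector:
    # 1 = perfectly phase-locked, ~0 = no consistent phase relationship
    return np.abs(np.mean(np.exp(1j * (phase_x - phase_y))))
```

Two sinusoids with a constant phase offset yield a PLV near 1, while independent noise yields a value near 0; in practice the PLV is computed per frequency band after band-pass filtering.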
In the subject-dependent scenario, the random forest (RF) classifier achieved a maximum accuracy of 94.60% for pairwise classification, while the artificial neural network (ANN) classifier achieved a maximum accuracy of 66.92% for multiclass classification. In the subject-independent scenario, the RF classifier achieved maximum accuracies of 81.02% for pairwise classification and 55.58% for multiclass classification. Moreover, EEG signals were classified based on frequency bands and brain lobes, revealing that the theta (θ) and delta (δ) bands, as well as the frontal and temporal lobes, are sufficient for distinguishing between brain states.
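The pairwise versus multiclass evaluation described above can be sketched with a random forest classifier on a feature matrix. The data here are synthetic stand-ins (the study's actual features, labels, and hyperparameters are not specified in the abstract):

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

# Hypothetical stand-in data: rows = EEG trials, columns = extracted features
rng = np.random.default_rng(42)
n_trials, n_features = 400, 20
X = rng.standard_normal((n_trials, n_features))
# Four states: 0 = resting, 1 = listening, 2 = imagined speech, 3 = actual speech
labels = rng.integers(0, 4, n_trials)

def fit_and_score(X, y):
    """Train an RF classifier on a stratified split and return test accuracy."""
    X_tr, X_te, y_tr, y_te = train_test_split(
        X, y, test_size=0.2, random_state=0, stratify=y
    )
    clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)
    return accuracy_score(y_te, clf.predict(X_te))

# Pairwise scenario: keep only two of the four states (e.g. rest vs. imagined)
pair_mask = np.isin(labels, [0, 2])
pair_acc = fit_and_score(X[pair_mask], labels[pair_mask])

# Multiclass scenario: all four states at once
multi_acc = fit_and_score(X, labels)
```

With random features both accuracies hover near chance (50% and 25% respectively); on discriminative EEG features the same pipeline would produce figures comparable to those reported.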
These findings pave the way for a system capable of automatically segmenting imagined-speech segments from continuous EEG signals.