Zhu Xianglong, Meng Ming, Yan Zewen, Luo Zhizeng
School of Automation, Hangzhou Dianzi University, Hangzhou 310018, China.
Brain Sci. 2025 Jan 7;15(1):50. doi: 10.3390/brainsci15010050.
Decoding motor intentions from electroencephalogram (EEG) signals is a critical component of motor imagery-based brain-computer interfaces (MI-BCIs). In traditional EEG signal classification, effectively exploiting the valuable information contained in the EEG is crucial.
To further optimize the use of information from various domains, we propose a novel framework based on multi-domain feature rotation transformation and stacking ensemble for classifying MI tasks.
Initially, we extract time-domain, frequency-domain, time-frequency-domain, and spatial-domain features from the EEG signals, and perform feature selection within each domain to identify significant features with strong discriminative capacity. Subsequently, local rotation transformations are applied to the significant feature set to generate a rotated feature set, enhancing the representational capacity of the features. Next, the rotated features are fused with the original significant features of each domain to obtain composite features for each domain. Finally, we employ a stacking ensemble approach: the prediction results of the base classifiers corresponding to the different domain features, together with the set of significant features, undergo linear discriminant analysis for dimensionality reduction, yielding a discriminative feature integration that serves as input to the meta-classifier for classification.
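The following is a minimal sketch of the rotation-and-stacking pipeline described above, not the authors' implementation. It assumes the per-domain significant features are already extracted as NumPy arrays, approximates the "local rotation transformation" with a Rotation-Forest-style PCA rotation on random feature subsets, and uses SVM base classifiers, LDA reduction, and a logistic-regression meta-classifier purely as illustrative choices.

```python
# Sketch of multi-domain rotation + stacking ensemble (assumptions noted above).
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_predict
from sklearn.svm import SVC

rng = np.random.default_rng(0)

def local_rotation(X, n_groups=4):
    """Rotate random feature subsets with PCA (Rotation-Forest-style approximation)."""
    idx = rng.permutation(X.shape[1])
    rotated = np.empty_like(X)
    for group in np.array_split(idx, n_groups):
        pca = PCA(n_components=len(group)).fit(X[:, group])
        rotated[:, group] = pca.transform(X[:, group])
    return rotated

# Toy stand-ins for the selected significant features of each domain
# (time, frequency, time-frequency, spatial); shapes are arbitrary here.
n_trials = 200
y = rng.integers(0, 2, n_trials)
domains = {name: rng.normal(size=(n_trials, d))
           for name, d in [("time", 12), ("freq", 16), ("tf", 20), ("spatial", 8)]}

# Composite features per domain: original significant features + rotated copy.
composite = {name: np.hstack([X, local_rotation(X)]) for name, X in domains.items()}

# Base classifiers: out-of-fold class probabilities for each domain.
base_preds = [cross_val_predict(SVC(kernel="rbf", probability=True),
                                X, y, cv=5, method="predict_proba")
              for X in composite.values()]

# Stack the base predictions with the pooled significant features, reduce with
# LDA, and feed the discriminative projection to the meta-classifier.
stacked = np.hstack(base_preds + [np.hstack(list(domains.values()))])
meta_input = LinearDiscriminantAnalysis(n_components=1).fit_transform(stacked, y)
meta_clf = LogisticRegression().fit(meta_input, y)
print("meta-classifier training accuracy:", meta_clf.score(meta_input, y))
```

In practice, the base-classifier probabilities would be produced out-of-fold (as with `cross_val_predict` here) so the meta-classifier is not trained on leaked predictions.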
The proposed method achieves average classification accuracies of 92.92%, 89.13%, and 86.26% on the BCI Competition III Dataset IVa, BCI Competition IV Dataset I, and BCI Competition IV Dataset 2a, respectively.
Experimental results show that the proposed method outperforms several existing MI classification methods, such as Common Time-Frequency-Spatial Patterns and the Selective Extract of Multi-View Time-Frequency Decomposed Spatial method, in terms of classification accuracy and robustness.