
Transforming Motor Imagery Analysis: A Novel EEG Classification Framework Using AtSiftNet Method.

Affiliations

College of Civil Aviation, Nanjing University of Aeronautics and Astronautics, Nanjing 211106, China.

School of Automation, Northwestern Polytechnical University, Xi'an 710072, China.

Publication Information

Sensors (Basel). 2024 Oct 7;24(19):6466. doi: 10.3390/s24196466.

Abstract

This paper presents an innovative feature extraction approach based on self-attention, combined with various feature selection techniques and collectively called the AtSiftNet method, to enhance the classification of motor imagery activities from electroencephalography (EEG) signals. The EEG signals were first sorted and then denoised using multiscale principal component analysis to obtain clean EEG signals; a non-denoised experiment was also conducted for comparison. The clean EEG signals then underwent self-attention feature extraction to compute the features of each trial (i.e., 350×18). The best 1 or 15 features were subsequently extracted through eight different feature selection techniques. Finally, five different machine learning and neural network classification models were employed to calculate the accuracy, sensitivity, and specificity of this approach. All experiments used BCI Competition III dataset IVa, which comprises the recordings of the five volunteers who participated in the competition. The findings reveal that, for both the 1 and the 15 best-selected features per trial, the highest average classification accuracies were obtained with ReliefF (99.946%), Mutual Information (98.902%), Independent Component Analysis (99.62%), and Principal Component Analysis (98.884%). These accuracies were achieved for motor imagery using a Support Vector Machine (SVM) as the classifier. In addition, five-fold cross-validation was performed to assess fair performance estimation and the robustness of the model, yielding an average accuracy of 99.89%. These findings indicate that the proposed framework provides a resilient biomarker with minimal computational complexity, making it a suitable choice for advancing motor imagery Brain-Computer Interfaces (BCIs).
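The pipeline described above (denoise, extract per-trial features, keep the top-k features, classify with an SVM, evaluate with five-fold cross-validation) can be sketched with scikit-learn. This is an illustrative skeleton under loud assumptions, not the authors' implementation: synthetic data stands in for BCI Competition III dataset IVa, plain PCA stands in for multiscale PCA denoising, a mutual-information selector represents just one of the paper's eight selection techniques, and the self-attention feature extractor is omitted entirely.

```python
# Hypothetical sketch of an AtSiftNet-style classification pipeline.
# Synthetic data replaces the real EEG trials; all stage choices here
# are stand-ins for the components described in the abstract.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 60))           # 200 toy "trials", 60 features each
y = rng.integers(0, 2, size=200)         # two motor-imagery classes
X[y == 1, :5] += 2.0                     # inject class signal so the toy task is learnable

pipe = make_pipeline(
    StandardScaler(),
    PCA(n_components=30),                        # coarse stand-in for MSPCA denoising
    SelectKBest(mutual_info_classif, k=15),      # keep the 15 best features
    SVC(kernel="rbf"),                           # SVM classifier, as in the paper
)

scores = cross_val_score(pipe, X, y, cv=5)       # five-fold cross-validation
print(f"mean accuracy: {scores.mean():.3f}")
```

Swapping `mutual_info_classif` for another scoring function (or replacing `SelectKBest` with a ReliefF implementation) would mimic comparing the different selection techniques the paper evaluates.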


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3124/11479282/46a929bd294c/sensors-24-06466-g001.jpg
