Ahmadi Hossein, Mesin Luca
Mathematical Biology and Physiology, Department of Electronics and Telecommunications, Politecnico di Torino, 10129 Turin, Italy.
J Neural Eng. 2025 May 6;22(3). doi: 10.1088/1741-2552/add08f.
Objective: Extracting universal, task-independent semantic features from electroencephalography (EEG) signals remains an open challenge. Traditional approaches are often task-specific, limiting their generalization across different EEG paradigms. This study aims to develop a robust, unsupervised framework for learning high-level, task-independent neural representations.
Approach: We propose a novel framework integrating convolutional neural networks, AutoEncoders, and Transformers to extract both low-level spatiotemporal patterns and high-level semantic features from EEG signals. The model is trained in an unsupervised manner to ensure adaptability across diverse EEG paradigms, including motor imagery (MI), steady-state visually evoked potentials (SSVEPs), and event-related potentials (ERPs, specifically P300). Extensive analyses, including clustering, correlation, and ablation studies, validate the quality and interpretability of the extracted features.
Main results: Our method achieves state-of-the-art performance, with average classification accuracies of 83.50% and 84.84% on the MI datasets (BCICIV_2a and BCICIV_2b), 98.41% and 99.66% on the SSVEP datasets (Lee2019-SSVEP and Nakanishi2015), and an average AUC of 91.80% across eight ERP datasets. t-distributed stochastic neighbor embedding (t-SNE) and clustering analyses reveal that the extracted features exhibit enhanced separability and structure compared to raw EEG data. Correlation studies confirm the framework's ability to balance universal and subject-specific features, while ablation results highlight the near-optimality of the selected model configuration.
Significance: This work establishes a universal framework for task-independent semantic feature extraction from EEG signals, bridging the gap between conventional feature engineering and modern deep learning methods.
By providing robust, generalizable representations across diverse EEG paradigms, this approach lays the foundation for advanced brain-computer interface applications, cross-task EEG analysis, and future developments in semantic EEG processing.
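The abstract does not include an implementation, but the three-stage pipeline it describes — a convolutional stage for low-level spatiotemporal patterns, an AutoEncoder bottleneck for unsupervised compression, and a Transformer (self-attention) stage for high-level semantic features — can be sketched as a forward pass in plain NumPy. All sizes below (filter count, kernel length, latent dimension, epoch length) are illustrative assumptions, not the authors' configuration; only the 22-channel count borrows from the BCICIV_2a montage mentioned in the results.

```python
import numpy as np

rng = np.random.default_rng(0)

def conv1d(x, w):
    """Valid 1-D convolution along time: x (C, T), w (F, C, K) -> (F, T-K+1)."""
    C, T = x.shape
    F, _, K = w.shape
    out = np.zeros((F, T - K + 1))
    for f in range(F):
        for t in range(T - K + 1):
            out[f, t] = np.sum(x[:, t:t + K] * w[f])
    return out

def self_attention(X):
    """Single-head scaled dot-product self-attention: X (T, D) -> (T, D)."""
    D = X.shape[1]
    scores = X @ X.T / np.sqrt(D)
    scores -= scores.max(axis=1, keepdims=True)  # subtract row max for stability
    A = np.exp(scores)
    A /= A.sum(axis=1, keepdims=True)            # rows of A sum to 1
    return A @ X

# Toy EEG epoch: 22 channels (BCICIV_2a montage), 256 time samples (assumed).
eeg = rng.standard_normal((22, 256))

# 1) CNN stage: spatiotemporal filters over channels and time (weights random
#    here; in the actual framework they are learned without labels).
w = rng.standard_normal((8, 22, 16)) * 0.05       # 8 filters, kernel length 16
feat = np.maximum(conv1d(eeg, w), 0)              # ReLU, shape (8, 241)

# 2) AutoEncoder bottleneck: compress each time step's 8 filter activations
#    to a 4-dim latent code; the decoder would be trained to reconstruct feat.
enc = rng.standard_normal((8, 4)) * 0.1
dec = rng.standard_normal((4, 8)) * 0.1
latent = feat.T @ enc                             # (241, 4) latent sequence
recon = latent @ dec                              # (241, 8) reconstruction

# 3) Transformer stage: self-attention over the latent sequence yields
#    context-aware, higher-level features.
semantic = self_attention(latent)                 # (241, 4)

print(feat.shape, latent.shape, semantic.shape)
```

In a trained version of this sketch, stages 1–2 would be optimized with a reconstruction loss (unsupervised, as the abstract states) and the attention stage stacked into full Transformer blocks; the point here is only the shape flow from raw multichannel EEG to a compact semantic feature sequence.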