

Self-Supervised Learning with Adaptive Frequency-Time Attention Transformer for Seizure Prediction and Classification.

Author Information

Huang Yajin, Chen Yuncan, Xu Shimin, Wu Dongyan, Wu Xunyi

Affiliations

Department of Neurology, Huashan Hospital, Fudan University, Shanghai 200040, China.

Publication Information

Brain Sci. 2025 Apr 7;15(4):382. doi: 10.3390/brainsci15040382.

Abstract

BACKGROUND

In deep learning-based epilepsy prediction and classification, enhancing the extraction of electroencephalogram (EEG) features is crucial for improving model accuracy. Traditional supervised learning methods rely on large, finely annotated datasets, which limits the feasibility of large-scale training. Recently, self-supervised learning approaches based on masking-and-reconstruction strategies have emerged, reducing the dependence on labeled data. However, these methods remain vulnerable to the inherent noise and signal degradation in EEG data, which diminishes the robustness of feature extraction and overall model performance.
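To make the masking-and-reconstruction idea concrete, the following is a minimal sketch of the general strategy, not this paper's implementation: random patches of the EEG sequence are zeroed out, an encoder-decoder pair predicts the original values, and the reconstruction loss is computed only on the masked positions. The patch length, mask ratio, and toy Transformer in the usage example are illustrative assumptions.

```python
# Minimal masking-and-reconstruction pretraining sketch for patched EEG.
# Illustrative only: patch size, mask ratio, and the models are assumptions.
import torch
import torch.nn as nn

def mask_and_reconstruct_loss(x, encoder, decoder, mask_ratio=0.5):
    """x: (batch, num_patches, patch_len) EEG split into fixed-length patches."""
    b, n, d = x.shape
    num_masked = int(n * mask_ratio)
    # Pick a random subset of patch indices to mask in each sample.
    idx = torch.rand(b, n).argsort(dim=1)
    masked_idx = idx[:, :num_masked]                         # (b, num_masked)
    x_corrupt = x.clone()
    x_corrupt[torch.arange(b)[:, None], masked_idx] = 0.0    # zero out masked patches
    # Encode the corrupted sequence, then predict the original patches.
    recon = decoder(encoder(x_corrupt))                      # (b, n, d)
    # Reconstruction loss only on the masked positions.
    target = x[torch.arange(b)[:, None], masked_idx]
    pred = recon[torch.arange(b)[:, None], masked_idx]
    return nn.functional.mse_loss(pred, target)

# Toy usage: 8 recordings, 20 patches of 64 samples each.
enc = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model=64, nhead=4, batch_first=True),
    num_layers=2)
dec = nn.Linear(64, 64)
loss = mask_and_reconstruct_loss(torch.randn(8, 20, 64), enc, dec)
```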

METHODS

In this study, we proposed a self-supervised Transformer network enhanced with Adaptive Frequency-Time Attention (AFTA) that learns robust EEG feature representations from unlabeled data within a masking-and-reconstruction framework. Specifically, we pretrained the Transformer network with a self-supervised objective and subsequently fine-tuned the pretrained model on downstream tasks such as seizure prediction and classification. To mitigate the impact of inherent noise in EEG signals and to strengthen feature extraction, we incorporated AFTA into the Transformer architecture. AFTA contains an Adaptive Frequency Filtering Module (AFFM) that performs adaptive global and local filtering in the frequency domain; this module is integrated with temporal attention mechanisms, enhancing the model's self-supervised learning capability.
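The abstract does not spell out how AFFM's filtering is parameterized. As a hedged sketch, one common realization of "adaptive global and local filtering in the frequency domain" is a learnable complex-valued filter applied to the full spectrum (global branch) combined with a small depthwise convolution over neighboring frequency bins (local branch). Everything below, including hyperparameters, is an assumption rather than the published design.

```python
# Hedged sketch of an adaptive frequency-filtering block, assuming a
# learnable complex spectral filter (global) plus a depthwise conv over
# frequency bins (local). Not the published AFFM architecture.
import torch
import torch.nn as nn

class AdaptiveFrequencyFilter(nn.Module):
    def __init__(self, seq_len: int, dim: int):
        super().__init__()
        n_freq = seq_len // 2 + 1  # rFFT bin count for a real-valued signal
        # Global branch: one learnable complex weight per channel/frequency,
        # i.e. a learnable global (circular) convolution in the time domain.
        self.global_filt = nn.Parameter(torch.randn(dim, n_freq, 2) * 0.02)
        # Local branch: mixes each frequency bin with its neighbors.
        self.local_filt = nn.Conv1d(dim, dim, kernel_size=3, padding=1, groups=dim)

    def forward(self, x):                      # x: (batch, seq_len, dim)
        xf = torch.fft.rfft(x, dim=1)          # (batch, n_freq, dim), complex
        g = torch.view_as_complex(self.global_filt)          # (dim, n_freq)
        glob = xf * g.transpose(0, 1)                         # global filtering
        loc_mag = self.local_filt(xf.abs().transpose(1, 2))   # local magnitude smoothing
        loc = torch.polar(loc_mag.transpose(1, 2), xf.angle())
        return torch.fft.irfft(glob + loc, n=x.shape[1], dim=1)
```

Per the abstract, a block like this would sit alongside the temporal attention mechanism inside each Transformer layer, so the model filters in the frequency domain and attends in the time domain.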

RESULTS

Our method achieved state-of-the-art performance on EEG analysis tasks, consistently outperforming prior approaches on the TUSZ, TUAB, and TUEV datasets and attaining the highest AUROC (0.891), balanced accuracy (0.8002), weighted F1-score (0.8038), and Cohen's kappa (0.6089). These results validate its robustness, generalization, and effectiveness in seizure detection and classification across diverse EEG datasets.
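For reference, the four reported metrics can be computed with scikit-learn as follows; the labels and probabilities below are dummy placeholders, not results or evaluation code from the paper.

```python
# Computing the reported metrics with scikit-learn on dummy predictions.
import numpy as np
from sklearn.metrics import (roc_auc_score, balanced_accuracy_score,
                             f1_score, cohen_kappa_score)

rng = np.random.default_rng(0)
y_true = np.array([0, 1, 2, 1, 0, 2])            # illustrative multi-class labels
y_prob = rng.random((6, 3))
y_prob /= y_prob.sum(axis=1, keepdims=True)      # normalize to class probabilities
y_pred = y_prob.argmax(axis=1)

print(roc_auc_score(y_true, y_prob, multi_class="ovr"))  # AUROC (one-vs-rest)
print(balanced_accuracy_score(y_true, y_pred))           # balanced accuracy
print(f1_score(y_true, y_pred, average="weighted"))      # weighted F1-score
print(cohen_kappa_score(y_true, y_pred))                 # Cohen's kappa
```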


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c63e/12025975/6c8d373c9590/brainsci-15-00382-g001.jpg
