

Self-Supervised Learning with Adaptive Frequency-Time Attention Transformer for Seizure Prediction and Classification

Authors

Huang Yajin, Chen Yuncan, Xu Shimin, Wu Dongyan, Wu Xunyi

Affiliation

Department of Neurology, Huashan Hospital, Fudan University, Shanghai 200040, China.

Publication

Brain Sci. 2025 Apr 7;15(4):382. doi: 10.3390/brainsci15040382.

DOI: 10.3390/brainsci15040382
PMID: 40309845
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12025975/
Abstract

BACKGROUND

In deep learning-based epilepsy prediction and classification, enhancing the extraction of electroencephalogram (EEG) features is crucial for improving model accuracy. Traditional supervised learning methods rely on large, detailed annotated datasets, limiting the feasibility of large-scale training. Recently, self-supervised learning approaches using masking-and-reconstruction strategies have emerged, reducing dependence on labeled data. However, these methods are vulnerable to inherent noise and signal degradation in EEG data, which diminishes feature extraction robustness and overall model performance.
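The masking-and-reconstruction objective described above can be sketched in a few lines. This is a generic masked-autoencoding sketch, not the paper's implementation: the `encode`/`decode` callables, the zero-filling of masked samples, and the 50% mask ratio are all illustrative assumptions.

```python
import numpy as np

def masked_reconstruction_loss(encode, decode, eeg, mask_ratio=0.5, rng=None):
    """Mask random time steps of an EEG batch, reconstruct the full
    signal from the visible context, and score the error only on the
    hidden positions (the masked-autoencoding objective).

    eeg: array of shape (batch, channels, time).
    encode/decode: callables standing in for a hypothetical model.
    """
    rng = rng or np.random.default_rng(0)
    # Boolean mask over time steps, shared across channels: True = hidden.
    mask = rng.random((eeg.shape[0], 1, eeg.shape[2])) < mask_ratio
    visible = np.where(mask, 0.0, eeg)            # hide masked samples
    recon = decode(encode(visible))               # model's reconstruction
    mask_f = np.broadcast_to(mask, eeg.shape).astype(float)
    # Mean squared error restricted to the masked positions only.
    return float(((recon - eeg) ** 2 * mask_f).sum() / max(mask_f.sum(), 1.0))
```

Scoring only the masked positions is what forces the encoder to infer hidden signal content from context, which is where the learned representation comes from.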

METHODS

In this study, we proposed a self-supervised learning Transformer network enhanced with Adaptive Frequency-Time Attention (AFTA) for learning robust EEG feature representations from unlabeled data, utilizing a masking-and-reconstruction framework. Specifically, we pretrained the Transformer network using a self-supervised learning approach, and subsequently fine-tuned the pretrained model for downstream tasks like seizure prediction and classification. To mitigate the impact of inherent noise in EEG signals and enhance feature extraction capabilities, we incorporated AFTA into the Transformer architecture. AFTA incorporates an Adaptive Frequency Filtering Module (AFFM) to perform adaptive global and local filtering in the frequency domain. This module was then integrated with temporal attention mechanisms, enhancing the model's self-supervised learning capabilities.
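As a rough illustration of the frequency-domain idea behind AFFM, combining a global reweighting of all frequency bins with a local refinement on a selected band: the function name, gain parameterization, and explicit `band` argument are hypothetical stand-ins; in the paper the filters are learned inside the Transformer rather than passed in.

```python
import numpy as np

def adaptive_frequency_filter(x, global_gain, local_gain, band):
    """Hypothetical sketch of adaptive global + local frequency filtering.

    x:           (channels, time) EEG segment.
    global_gain: (freq_bins,) gain applied to every frequency bin.
    local_gain:  (freq_bins,) extra gain applied only inside `band`.
    band:        (lo, hi) bin indices of the locally emphasised band.
    """
    spec = np.fft.rfft(x, axis=-1)                 # real FFT along time
    gains = global_gain.astype(float).copy()
    lo, hi = band
    gains[lo:hi] = gains[lo:hi] + local_gain[lo:hi]  # local refinement
    spec = spec * gains                            # reweight frequency bins
    # Back to the time domain; n= keeps the original signal length.
    return np.fft.irfft(spec, n=x.shape[-1], axis=-1)
```

Because the filtering is a per-bin multiplication in the frequency domain, noisy bands can be attenuated and informative bands emphasised before the temporal attention layers see the signal.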

RESULTS

Our method achieved strong performance on EEG analysis tasks, consistently outperforming state-of-the-art approaches across the TUSZ, TUAB, and TUEV datasets, with the highest AUROC (0.891), balanced accuracy (0.8002), weighted F1-score (0.8038), and Cohen's kappa (0.6089). These results support its robustness, generalization, and effectiveness for seizure detection and classification on diverse EEG datasets.
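For reference, the two less common metrics reported above can be computed as follows; this is a generic sketch of the standard metric definitions, not the authors' evaluation code.

```python
import numpy as np

def balanced_accuracy(y_true, y_pred):
    """Mean of per-class recalls; insensitive to class imbalance."""
    classes = np.unique(y_true)
    recalls = [np.mean(y_pred[y_true == c] == c) for c in classes]
    return float(np.mean(recalls))

def cohens_kappa(y_true, y_pred):
    """Agreement beyond chance: (p_o - p_e) / (1 - p_e), where p_o is
    the observed agreement and p_e the agreement expected from the
    marginal label frequencies."""
    classes = np.unique(np.concatenate([y_true, y_pred]))
    p_o = np.mean(y_true == y_pred)
    p_e = sum(np.mean(y_true == c) * np.mean(y_pred == c) for c in classes)
    return float((p_o - p_e) / (1 - p_e))
```

Both metrics matter for EEG corpora like TUSZ, where seizure segments are far rarer than background: plain accuracy would be dominated by the majority class.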


Figures (PMC full text):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c63e/12025975/6c8d373c9590/brainsci-15-00382-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c63e/12025975/0bae0002a7be/brainsci-15-00382-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c63e/12025975/bd23ce7e28bf/brainsci-15-00382-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c63e/12025975/e841693d6df2/brainsci-15-00382-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c63e/12025975/c3c95a697225/brainsci-15-00382-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c63e/12025975/a4e69dc278c7/brainsci-15-00382-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c63e/12025975/972b24eeac02/brainsci-15-00382-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/c63e/12025975/215069a14461/brainsci-15-00382-g008.jpg

