
BiTS-SleepNet: An Attention-Based Two-Stage Temporal-Spectral Fusion Model for Sleep Staging With Single-Channel EEG.

Author Information

Cong Zhaoyang, Zhao Minghui, Gao Hongxiang, Lou Meng, Zheng Guowei, Wang Ziyang, Wang Xingyao, Yan Chang, Ling Li, Li Jianqing, Liu Chengyu

Publication Information

IEEE J Biomed Health Inform. 2025 May;29(5):3366-3376. doi: 10.1109/JBHI.2024.3523908. Epub 2025 May 6.

Abstract

Automated sleep staging is crucial for assessing sleep quality and diagnosing sleep-related diseases. Single-channel EEG has attracted significant attention due to its portability and accessibility. Most existing automated sleep staging methods emphasize temporal information while neglecting spectral information, the relationship between sleep stage contextual features, and transition rules between sleep stages. To overcome these obstacles, this paper proposes an attention-based two-stage temporal-spectral fusion model (BiTS-SleepNet). The BiTS-SleepNet stage 1 network consists of a dual-stream temporal-spectral feature extractor branch and a temporal-spectral feature fusion module based on the cross-attention mechanism. These blocks are designed to autonomously extract and integrate the temporal and spectral features of EEG signals, leveraging temporal-spectral fusion information to discriminate between different sleep stages. The BiTS-SleepNet stage 2 network includes a feature context learning module (FCLM) based on Bi-GRU and a transition rules learning module (TRLM) based on the Conditional Random Field (CRF). The FCLM optimizes preliminary sleep stage results from the stage 1 network by learning dependencies between features of multiple adjacent stages. The TRLM additionally employs transition rules to optimize overall outcomes. We evaluated the BiTS-SleepNet on three public datasets: Sleep-EDF-20, Sleep-EDF-78, and SHHS, achieving accuracies of 88.50%, 85.09%, and 87.01%, respectively. The experimental results demonstrate that BiTS-SleepNet achieves competitive performance in comparison to recently published methods. This highlights its promise for practical applications.
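The abstract describes a two-stage pipeline: stage 1 fuses temporal and spectral epoch features with cross-attention, and stage 2 refines a sequence of epoch predictions with a Bi-GRU and CRF-based transition rules. The following is a minimal, hypothetical PyTorch sketch of that flow, not the authors' implementation: all layer sizes, the convolutional front-ends, the spectrogram shape, the 100 Hz / 30-s epoch assumption, and the simplified transition-score parameter (standing in for full CRF training and decoding) are illustrative assumptions.

```python
# Hypothetical sketch of the two-stage idea in the abstract (not the paper's code).
import torch
import torch.nn as nn

class Stage1TemporalSpectralFusion(nn.Module):
    """Dual-stream feature extractor with cross-attention fusion (stage 1)."""
    def __init__(self, d_model=128, n_heads=4, n_classes=5):
        super().__init__()
        # Temporal stream: 1-D convolutions over the raw 30-s EEG epoch.
        self.temporal = nn.Sequential(
            nn.Conv1d(1, d_model, kernel_size=50, stride=6), nn.ReLU(),
            nn.AdaptiveAvgPool1d(64),
        )
        # Spectral stream: convolutions over a precomputed spectrogram (assumed STFT magnitudes).
        self.spectral = nn.Sequential(
            nn.Conv2d(1, d_model, kernel_size=3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d((1, 64)),
        )
        # Cross-attention: temporal tokens attend to spectral tokens.
        self.cross_attn = nn.MultiheadAttention(d_model, n_heads, batch_first=True)
        self.classifier = nn.Linear(d_model, n_classes)

    def forward(self, eeg, spec):
        # eeg: (B, 1, T) raw signal; spec: (B, 1, F, T') spectrogram.
        t_feat = self.temporal(eeg).transpose(1, 2)              # (B, 64, d)
        s_feat = self.spectral(spec).squeeze(2).transpose(1, 2)  # (B, 64, d)
        fused, _ = self.cross_attn(t_feat, s_feat, s_feat)       # (B, 64, d)
        epoch_feat = fused.mean(dim=1)                           # (B, d)
        return epoch_feat, self.classifier(epoch_feat)           # preliminary per-epoch logits

class Stage2ContextRefinement(nn.Module):
    """Bi-GRU context learning over consecutive epoch features (stage 2)."""
    def __init__(self, d_model=128, n_classes=5):
        super().__init__()
        self.bigru = nn.GRU(d_model, d_model // 2, bidirectional=True, batch_first=True)
        self.classifier = nn.Linear(d_model, n_classes)
        # Learnable stage-transition scores as a CRF-style prior; full CRF
        # training/decoding is omitted in this sketch.
        self.transitions = nn.Parameter(torch.zeros(n_classes, n_classes))

    def forward(self, seq_feats):
        # seq_feats: (B, L, d) features of L consecutive epochs from stage 1.
        ctx, _ = self.bigru(seq_feats)
        return self.classifier(ctx)   # per-epoch emission scores (B, L, n_classes)

# Usage: score 20 consecutive 30-s epochs (100 Hz sampling assumed).
stage1 = Stage1TemporalSpectralFusion()
stage2 = Stage2ContextRefinement()
eeg = torch.randn(20, 1, 3000)       # raw epochs
spec = torch.randn(20, 1, 129, 59)   # per-epoch spectrograms (assumed shape)
feats, _ = stage1(eeg, spec)
logits = stage2(feats.unsqueeze(0))  # (1, 20, 5) context-refined stage scores
```

In this reading of the abstract, stage 1 produces one feature vector per 30-s epoch, and stage 2 consumes the feature sequence of several adjacent epochs so that contextual dependencies and stage-transition regularities can correct isolated misclassifications.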

