Multilevel Inter-modal and Intra-modal Transformer network with domain adversarial learning for multimodal sleep staging.

Author Information

He Yang-Yang, Liu Jian-Wei

Affiliation

College of Artificial Intelligence, China University of Petroleum-Beijing, Beijing 102249, China.

Publication Information

Cogn Neurodyn. 2025 Dec;19(1):80. doi: 10.1007/s11571-025-10262-w. Epub 2025 May 26.

Abstract

Sleep staging identification is a fundamental task in the diagnosis of sleep disorders. With the development of biosensing technology and deep learning, it has become possible to automatically decode the sleep process from electroencephalogram signals. However, owing to limited performance gains, most sleep staging methods do not simultaneously consider multimodal sleep signals such as electroencephalogram and electrooculogram signals. In this regard, we design a Multilevel Inter-modal and Intra-modal Transformer network with domain adversarial learning for multimodal sleep staging. We introduce a multilevel Transformer structure to fully capture the temporal dependencies within the sleep signals of each modality and the interdependencies among different modalities. At the same time, we employ multi-scale CNNs to learn time-domain and frequency-domain features separately. Our research promotes the application of Transformer models in the field of sleep staging identification. Moreover, owing to individual differences among subjects, models trained on one group's data often perform poorly when applied to another group; this is known as the domain generalization problem. Although domain adaptation methods are commonly used, fine-tuning on the target domain each time is cumbersome and impractical. To address these issues effectively without using target-domain information, we introduce domain adversarial learning, which helps the model learn domain-invariant features for better generalization across domains. We validated the superiority of our model on two commonly used datasets, where it significantly outperformed other baseline models. Our model efficiently extracts intra-modal and inter-modal dependencies from multimodal sleep data, making it suitable for scenarios requiring high accuracy.
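
The abstract describes the architecture only at a high level. As a rough illustration, the PyTorch sketch below shows one plausible arrangement of the pieces it names: a multi-scale 1-D CNN per modality (short kernels for fine temporal detail, long kernels for coarser, frequency-like patterns), an intra-modal Transformer encoder per modality, and an inter-modal Transformer over the concatenated modality tokens. All class names, kernel sizes, and layer counts here are illustrative assumptions, not the authors' implementation.

import torch
import torch.nn as nn


class MultiScaleCNN(nn.Module):
    """Two parallel 1-D conv branches with short and long kernels (assumed sizes)."""
    def __init__(self, d_model=64):
        super().__init__()
        self.small = nn.Sequential(  # short kernels: fine temporal detail
            nn.Conv1d(1, d_model // 2, kernel_size=50, stride=6, padding=25),
            nn.ReLU(), nn.MaxPool1d(8))
        self.large = nn.Sequential(  # long kernels: coarser, frequency-like patterns
            nn.Conv1d(1, d_model // 2, kernel_size=400, stride=50, padding=200),
            nn.ReLU(), nn.MaxPool1d(2))

    def forward(self, x):                      # x: (batch, 1, samples) for one modality
        a, b = self.small(x), self.large(x)
        t = min(a.size(-1), b.size(-1))        # align the two branch lengths
        return torch.cat([a[..., :t], b[..., :t]], dim=1).transpose(1, 2)  # (batch, seq, d_model)


class InterIntraModalTransformer(nn.Module):
    """Intra-modal encoders per modality, then one inter-modal encoder over all tokens."""
    def __init__(self, n_modalities=2, d_model=64, n_classes=5):
        super().__init__()
        self.cnns = nn.ModuleList(MultiScaleCNN(d_model) for _ in range(n_modalities))
        self.intra = nn.ModuleList(
            nn.TransformerEncoder(
                nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
            for _ in range(n_modalities))
        self.inter = nn.TransformerEncoder(
            nn.TransformerEncoderLayer(d_model, nhead=4, batch_first=True), num_layers=2)
        self.head = nn.Linear(d_model, n_classes)

    def forward(self, signals):   # signals: list of (batch, 1, samples) tensors, e.g. [EEG, EOG]
        tokens = [enc(cnn(x)) for cnn, enc, x in zip(self.cnns, self.intra, signals)]
        fused = self.inter(torch.cat(tokens, dim=1))   # inter-modal attention across modalities
        return self.head(fused.mean(dim=1))            # sleep-stage logits (W, N1, N2, N3, REM)

For a 30-second epoch sampled at 100 Hz, each modality would enter as a (batch, 1, 3000) tensor; the mean-pooled fused representation is also the natural place to attach a domain discriminator, as sketched next.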

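The domain adversarial learning component is likewise described only in general terms. One common way to realise it is a DANN-style gradient reversal layer: a domain discriminator tries to predict which subject or recording cohort a feature came from, while the reversed gradient pushes the feature encoder toward domain-invariant representations. The discriminator width, the lambda weight, and the loss combination below are assumptions for illustration, not the paper's exact formulation.

import torch
import torch.nn as nn


class GradReverse(torch.autograd.Function):
    """Identity in the forward pass; multiplies gradients by -lambda in the backward pass."""
    @staticmethod
    def forward(ctx, x, lam):
        ctx.lam = lam
        return x.view_as(x)

    @staticmethod
    def backward(ctx, grad_output):
        return -ctx.lam * grad_output, None    # no gradient for lambda itself


class DomainDiscriminator(nn.Module):
    """Predicts the source domain (e.g. subject or cohort) of a pooled feature vector."""
    def __init__(self, d_model=64, n_domains=20):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(d_model, 64), nn.ReLU(),
                                 nn.Linear(64, n_domains))

    def forward(self, feat, lam=1.0):
        return self.net(GradReverse.apply(feat, lam))


def adversarial_loss(stage_logits, stage_labels, features, domain_labels,
                     discriminator, lam=0.1):
    """Staging loss plus a domain-confusion term; reversed gradients reach the encoder."""
    ce = nn.CrossEntropyLoss()
    stage_loss = ce(stage_logits, stage_labels)
    domain_loss = ce(discriminator(features, lam), domain_labels)
    return stage_loss + domain_loss

Here features would be the pooled representation taken before the classification head (e.g. fused.mean(dim=1) in the sketch above), and domain_labels index training subjects or cohorts; no target-domain data is required, which matches the abstract's stated goal.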
