

GO-MAE: Self-supervised pre-training via masked autoencoder for OCT image classification of gynecology.

Authors

Wang Haoran, Guo Xinyu, Song Kaiwen, Sun Mingyang, Shao Yanbin, Xue Songfeng, Zhang Hongwei, Zhang Tianyu

Affiliation

Key Laboratory of Geophysical Exploration Equipment, Ministry of Education, College of Instrumentation and Electrical Engineering, Jilin University, Changchun 130012, China.

Publication

Neural Netw. 2025 Jan;181:106817. doi: 10.1016/j.neunet.2024.106817. Epub 2024 Oct 18.

DOI: 10.1016/j.neunet.2024.106817
PMID: 39500244
Abstract

Genitourinary syndrome of menopause (GSM) is a physiological disorder caused by reduced oestrogen levels in menopausal women. Its symptoms gradually worsen with age and prolonged menopausal status, gravely impacting patients' quality of life as well as their physical and mental health. In this regard, the optical coherence tomography (OCT) system, with its noncontact, noninvasive tomographic imaging process, effectively reduces the patient's burden in clinical diagnosis. Consequently, supervised computer vision models applied to OCT images have yielded excellent results for disease diagnosis. However, manual labeling of large numbers of medical images is expensive and time-consuming. To this end, this paper proposes GO-MAE, a pretraining framework for self-supervised learning on GSM OCT images based on the Masked Autoencoder (MAE). To the best of our knowledge, this is the first study to apply self-supervised learning methods to the field of GSM disease screening. Focusing on the semantic complexity and feature sparsity of GSM OCT images, the objective of this study is two-pronged. First, a dynamic masking strategy is introduced for OCT characteristics in downstream tasks; this method reduces the interference of invalid features on the model and shortens training time. Second, in the encoder design of the MAE, we propose a parallel convolutional neural network and Transformer architecture (C&T) that fuses local and global representations of the relevant lesions in an interactive manner, so that the model can still learn rich distinctions in feature information without labels. A series of experiments on the acquired GSM-OCT dataset revealed that GO-MAE yields significant improvements over existing state-of-the-art techniques. Furthermore, the model's robustness and interpretability were verified through comparative experiments and visualization, demonstrating its great potential for screening GSM symptoms.
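The random patch masking that MAE-style pretraining builds on can be sketched as follows. This is a minimal illustration of the generic MAE recipe only: the `patchify` and `random_mask` helpers, the 75% mask ratio, and the toy image are assumptions for illustration, not the paper's dynamic masking strategy or C&T encoder.

```python
import numpy as np

def patchify(img, patch):
    # Split a (H, W) image into non-overlapping (patch, patch) tiles,
    # each flattened to a vector: returns (num_patches, patch*patch).
    H, W = img.shape
    rows, cols = H // patch, W // patch
    tiles = img[:rows * patch, :cols * patch].reshape(rows, patch, cols, patch)
    return tiles.transpose(0, 2, 1, 3).reshape(rows * cols, patch * patch)

def random_mask(num_patches, mask_ratio, rng):
    # MAE-style random masking: shuffle patch indices and keep only the
    # first (1 - mask_ratio) fraction; the rest are hidden and must be
    # reconstructed by the decoder during pretraining.
    num_keep = int(num_patches * (1 - mask_ratio))
    perm = rng.permutation(num_patches)
    keep_idx = np.sort(perm[:num_keep])
    mask = np.ones(num_patches, dtype=bool)
    mask[keep_idx] = False  # False = visible, True = masked
    return keep_idx, mask

rng = np.random.default_rng(0)
img = rng.standard_normal((64, 64))    # stand-in for one OCT B-scan
patches = patchify(img, patch=16)      # 16 patches of 256 pixels each
keep_idx, mask = random_mask(len(patches), mask_ratio=0.75, rng=rng)
visible = patches[keep_idx]            # only these patches reach the encoder
print(patches.shape, visible.shape, int(mask.sum()))
```

With a 75% mask ratio, the encoder sees only a quarter of the patches, which is what makes MAE pretraining cheap relative to processing full images; the paper's dynamic masking strategy adapts this selection to OCT feature sparsity rather than sampling uniformly.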


Similar Articles

1. Cross-Attention Based Multi-Resolution Feature Fusion Model for Self-Supervised Cervical OCT Image Classification.
   IEEE/ACM Trans Comput Biol Bioinform. 2023 Jul-Aug;20(4):2541-2554. doi: 10.1109/TCBB.2023.3246979. Epub 2023 Aug 9.
2. Cervical OCT image classification using contrastive masked autoencoders with Swin Transformer.
   Comput Med Imaging Graph. 2024 Dec;118:102469. doi: 10.1016/j.compmedimag.2024.102469. Epub 2024 Nov 19.
3. RVM-GSM: Classification of OCT Images of Genitourinary Syndrome of Menopause Based on Integrated Model of Local-Global Information Pattern.
   Bioengineering (Basel). 2023 Apr 6;10(4):450. doi: 10.3390/bioengineering10040450.
4. Self-supervised learning improves robustness of deep learning lung tumor segmentation models to CT imaging differences.
   Med Phys. 2025 Mar;52(3):1573-1588. doi: 10.1002/mp.17541. Epub 2024 Dec 5.
5. Point based weakly semi-supervised biomarker detection with cross-scale and label assignment in retinal OCT images.
   Comput Methods Programs Biomed. 2024 Jun;251:108229. doi: 10.1016/j.cmpb.2024.108229. Epub 2024 May 15.
6. Cervical optical coherence tomography image classification based on contrastive self-supervised texture learning.
   Med Phys. 2022 Jun;49(6):3638-3653. doi: 10.1002/mp.15630. Epub 2022 Apr 13.
7. Stitched vision transformer for age-related macular degeneration detection using retinal optical coherence tomography images.
   PLoS One. 2024 Jun 5;19(6):e0304943. doi: 10.1371/journal.pone.0304943. eCollection 2024.
8. Self-supervised patient-specific features learning for OCT image classification.
   Med Biol Eng Comput. 2022 Oct;60(10):2851-2863. doi: 10.1007/s11517-022-02627-8. Epub 2022 Aug 5.
9. Global-Local Transformer Network for Automatic Retinal Pathological Fluid Segmentation in Optical Coherence Tomography Images.
   Comput Methods Programs Biomed. 2025 Jun;266:108772. doi: 10.1016/j.cmpb.2025.108772. Epub 2025 Apr 10.