
Self-supervised pretraining in the wild imparts image acquisition robustness to medical image transformers: an application to lung cancer segmentation

Authors

Jiang Jue, Veeraraghavan Harini

Affiliation

Memorial Sloan Kettering Cancer Center.

Publication

Proc Mach Learn Res. 2024 Jul;250:708-721.

PMID: 39831171
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11741178/
Abstract

Self-supervised learning (SSL) is an approach to pretrain models with unlabeled datasets and extract useful feature representations such that these models can be easily fine-tuned for various downstream tasks. Self-pretraining applies SSL on curated task-specific datasets without using task-specific labels. Increasing availability of public data repositories has now made it possible to utilize diverse and large, task unrelated datasets to pretrain models in the "wild" using SSL. However, the benefit of such wild-pretraining over self-pretraining has not been studied in the context of medical image analysis. Hence, we analyzed transformers (Swin and ViT) and a convolutional neural network created using wild- and self-pretraining trained to segment lung tumors from 3D-computed tomography (CT) scans in terms of: (a) accuracy, (b) fine-tuning epoch efficiency, and (c) robustness to image acquisition differences (contrast versus non-contrast, slice thickness, and image reconstruction kernels). We also studied feature reuse using centered kernel alignment (CKA) with the Swin networks. Our analysis with two independent testing (public N = 139; internal N = 196) datasets showed that wild-pretrained Swin models significantly outperformed self-pretrained Swin for the various imaging acquisitions. Fine-tuning epoch efficiency was higher for both wild-pretrained Swin and ViT models compared to their self-pretrained counterparts. Feature reuse close to the final encoder layers was lower than in the early layers for wild-pretrained models irrespective of the pretext tasks used in SSL. Models and code will be made available through GitHub upon manuscript acceptance.
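The abstract measures feature reuse with centered kernel alignment (CKA), a similarity score between the activations two networks (or two layers) produce for the same inputs. As a quick illustration only (not the paper's implementation, which compares Swin encoder layers), the linear variant of CKA can be sketched as:

```python
import numpy as np

def linear_cka(X, Y):
    """Linear centered kernel alignment between two activation matrices.

    X: (n_samples, d1) and Y: (n_samples, d2) hold features extracted for
    the same n inputs. Returns a similarity in [0, 1]; higher means the two
    representations are more alike (more feature reuse).
    """
    # Center each feature dimension so the implied Gram matrices are centered.
    X = X - X.mean(axis=0, keepdims=True)
    Y = Y - Y.mean(axis=0, keepdims=True)
    # HSIC-style numerator and the two self-similarity normalizers,
    # expressed via Frobenius norms of (cross-)covariance matrices.
    numerator = np.linalg.norm(Y.T @ X, "fro") ** 2
    norm_x = np.linalg.norm(X.T @ X, "fro")
    norm_y = np.linalg.norm(Y.T @ Y, "fro")
    return numerator / (norm_x * norm_y)

rng = np.random.default_rng(0)
feats = rng.standard_normal((100, 64))
print(linear_cka(feats, feats))       # identical features give CKA of 1.0 (up to float error)
print(linear_cka(feats, 3.0 * feats))  # invariant to isotropic rescaling
print(linear_cka(feats, rng.standard_normal((100, 32))))  # unrelated features score low
```

Because CKA is invariant to rotations and isotropic scaling of the feature spaces, it can compare layers of different widths, which is what makes it usable for tracking how much pretrained features survive fine-tuning.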


Figures (PMC):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dc75/11741178/14ff93f54f87/nihms-2026189-f0001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dc75/11741178/c0ddcabe324e/nihms-2026189-f0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dc75/11741178/b2efda358d5d/nihms-2026189-f0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dc75/11741178/08ec88721f80/nihms-2026189-f0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/dc75/11741178/cb6dd58d4725/nihms-2026189-f0005.jpg

Similar Articles

1. Self-supervised pretraining in the wild imparts image acquisition robustness to medical image transformers: an application to lung cancer segmentation.
Proc Mach Learn Res. 2024 Jul;250:708-721.
2. Self-supervised learning improves robustness of deep learning lung tumor segmentation models to CT imaging differences.
Med Phys. 2025 Mar;52(3):1573-1588. doi: 10.1002/mp.17541. Epub 2024 Dec 5.
3. Improving Data-Efficiency and Robustness of Medical Imaging Segmentation Using Inpainting-Based Self-Supervised Learning.
Bioengineering (Basel). 2023 Feb 4;10(2):207. doi: 10.3390/bioengineering10020207.
4. Transformer-based unsupervised contrastive learning for histopathological image classification.
Med Image Anal. 2022 Oct;81:102559. doi: 10.1016/j.media.2022.102559. Epub 2022 Jul 30.
5. Stepwise incremental pretraining for integrating discriminative, restorative, and adversarial learning.
Med Image Anal. 2024 Jul;95:103159. doi: 10.1016/j.media.2024.103159. Epub 2024 Apr 16.
6. Self-supervised 3D anatomy segmentation using self-distilled masked image transformer (SMIT).
Med Image Comput Comput Assist Interv. 2022 Sep;13434:556-566. doi: 10.1007/978-3-031-16440-8_53. Epub 2022 Sep 16.
7. Why does my medical AI look at pictures of birds? Exploring the efficacy of transfer learning across domain boundaries.
Comput Methods Programs Biomed. 2025 Apr;261:108634. doi: 10.1016/j.cmpb.2025.108634. Epub 2025 Jan 31.
8. Large-scale benchmarking and boosting transfer learning for medical image analysis.
Med Image Anal. 2025 May;102:103487. doi: 10.1016/j.media.2025.103487. Epub 2025 Feb 21.
9. Automated segmentation of lesions and organs at risk on [Ga]Ga-PSMA-11 PET/CT images using self-supervised learning with Swin UNETR.
Cancer Imaging. 2024 Feb 29;24(1):30. doi: 10.1186/s40644-024-00675-x.
10. Leveraging Pretrained Transformers for Efficient Segmentation and Lesion Detection in Cone-Beam Computed Tomography Scans.
J Endod. 2024 Oct;50(10):1505-1514.e1. doi: 10.1016/j.joen.2024.07.012. Epub 2024 Aug 2.

Cited By

1. Benchmarking transferability of self-supervised pretraining for multi-organ segmentation on different modalities.
Proc IEEE Int Symp Biomed Imaging. 2025 Apr;2025. doi: 10.1109/isbi60581.2025.10980778. Epub 2025 May 12.

References

1. A Unified Visual Information Preservation Framework for Self-supervised Pre-Training in Medical Image Analysis.
IEEE Trans Pattern Anal Mach Intell. 2023 Jul;45(7):8020-8035. doi: 10.1109/TPAMI.2023.3234002. Epub 2023 Jun 5.
2. Self-supervised 3D anatomy segmentation using self-distilled masked image transformer (SMIT).
Med Image Comput Comput Assist Interv. 2022 Sep;13434:556-566. doi: 10.1007/978-3-031-16440-8_53. Epub 2022 Sep 16.
3. A Systematic Benchmarking Analysis of Transfer Learning for Medical Image Analysis.
Domain Adapt Represent Transf Afford Healthc AI Resour Divers Glob Health (2021). 2021 Sep-Oct;12968:3-13. doi: 10.1007/978-3-030-87722-4_1. Epub 2021 Sep 21.
4. Unpaired Cross-Modality Educed Distillation (CMEDL) for Medical Image Segmentation.
IEEE Trans Med Imaging. 2022 May;41(5):1057-1068. doi: 10.1109/TMI.2021.3132291. Epub 2022 May 2.
5. Models Genesis.
Med Image Anal. 2021 Jan;67:101840. doi: 10.1016/j.media.2020.101840. Epub 2020 Oct 13.
6. Artificial intelligence for the detection of COVID-19 pneumonia on chest CT using multinational datasets.
Nat Commun. 2020 Aug 14;11(1):4080. doi: 10.1038/s41467-020-17971-2.
7. Rubik's Cube+: A self-supervised feature learning framework for 3D medical image analysis.
Med Image Anal. 2020 Aug;64:101746. doi: 10.1016/j.media.2020.101746. Epub 2020 Jun 6.
8. Author Correction: Imaging and clinical data archive for head and neck squamous cell carcinoma patients treated with radiotherapy.
Sci Data. 2018 Nov 27;5(1):1. doi: 10.1038/s41597-018-0002-5.
9. A radiogenomic dataset of non-small cell lung cancer.
Sci Data. 2018 Oct 16;5:180202. doi: 10.1038/sdata.2018.202.