Stepwise incremental pretraining for integrating discriminative, restorative, and adversarial learning.

Authors

Guo Zuwei, Islam Nahid Ul, Gotway Michael B, Liang Jianming

Affiliations

Arizona State University, Tempe, AZ 85281, USA.

Mayo Clinic, Scottsdale, AZ 85259, USA.

Publication

Med Image Anal. 2024 Jul;95:103159. doi: 10.1016/j.media.2024.103159. Epub 2024 Apr 16.

Abstract

We have developed a United framework that integrates three self-supervised learning (SSL) ingredients (discriminative, restorative, and adversarial learning), enabling collaborative learning among the three ingredients and yielding three transferable components: a discriminative encoder, a restorative decoder, and an adversary encoder. To leverage this collaboration, we redesigned nine prominent self-supervised methods (Rotation, Jigsaw, Rubik's Cube, Deep Clustering, TransVW, MoCo, BYOL, PCRL, and Swin UNETR) and augmented each with its missing components in a United framework for 3D medical imaging. However, such a United framework increases model complexity, making 3D pretraining difficult. To overcome this difficulty, we propose stepwise incremental pretraining, a strategy that unifies the pretraining: a discriminative encoder is first trained via discriminative learning; the pretrained discriminative encoder is then attached to a restorative decoder, forming a skip-connected encoder-decoder, for further joint discriminative and restorative learning; finally, the pretrained encoder-decoder is associated with an adversary encoder for full discriminative, restorative, and adversarial learning. Our extensive experiments demonstrate that stepwise incremental pretraining stabilizes the pretraining of United models, yielding significant performance gains and annotation-cost reduction via transfer learning in six target tasks, ranging from classification to segmentation, across diseases, organs, datasets, and modalities. This improvement is attributed to the synergy of the three SSL ingredients in our United framework, unleashed through stepwise incremental pretraining. Our code and pretrained models are available at GitHub.com/JLiangLab/StepwisePretraining.
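The three-stage schedule described in the abstract can be sketched as follows. This is an illustrative outline only: the function names, loss weights, and epoch counts are assumptions for exposition, not the authors' released code (see GitHub.com/JLiangLab/StepwisePretraining for the actual implementation).

```python
def stage_losses(stage):
    """Which SSL loss terms are active at each incremental pretraining stage."""
    return {
        1: ["discriminative"],                                # (D): encoder alone
        2: ["discriminative", "restorative"],                 # (D)+R: attach skip-connected decoder
        3: ["discriminative", "restorative", "adversarial"],  # ((D)+R)+A: add adversary encoder
    }[stage]


def stepwise_pretrain(step_fn, epochs_per_stage=(100, 100, 100)):
    """Run the three stages in order, widening the objective at each stage.

    `step_fn(active_losses)` stands in for one optimization step whose total
    loss sums the currently active ingredient losses, e.g.
    L = L_d + lambda_r * L_r at stage 2 (+ lambda_a * L_a at stage 3).
    """
    history = []
    for stage, epochs in enumerate(epochs_per_stage, start=1):
        active = stage_losses(stage)
        for _ in range(epochs):
            step_fn(active)  # one gradient step on the combined loss
        history.append(active)
    return history
```

The key idea the sketch captures is that each stage inherits the weights trained in the previous one and only then widens the objective, rather than optimizing all three losses from scratch at once.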


Similar Articles

1
Stepwise incremental pretraining for integrating discriminative, restorative, and adversarial learning.
Med Image Anal. 2024 Jul;95:103159. doi: 10.1016/j.media.2024.103159. Epub 2024 Apr 16.
2
Discriminative, Restorative, and Adversarial Learning: Stepwise Incremental Pretraining.
Domain Adapt Represent Transf (2022). 2022 Sep;13542:66-76. doi: 10.1007/978-3-031-16852-9_7. Epub 2022 Sep 15.
3
Atraumatic restorative treatment versus conventional restorative treatment for managing dental caries.
Cochrane Database Syst Rev. 2017 Dec 28;12(12):CD008072. doi: 10.1002/14651858.CD008072.pub2.
4
Predicting cognitive decline: Deep-learning reveals subtle brain changes in pre-MCI stage.
J Prev Alzheimers Dis. 2025 May;12(5):100079. doi: 10.1016/j.tjpad.2025.100079. Epub 2025 Feb 6.
5
Automatic Segmentation and Alignment of Uterine Shapes from 3D Ultrasound Data.
Comput Biol Med. 2024 Aug;178:108794. doi: 10.1016/j.compbiomed.2024.108794. Epub 2024 Jun 27.
6
Incentives for preventing smoking in children and adolescents.
Cochrane Database Syst Rev. 2017 Jun 6;6(6):CD008645. doi: 10.1002/14651858.CD008645.pub3.

Cited By

1
Representing Part-Whole Hierarchies in Foundation Models by Learning Localizability, Composability, and Decomposability from Anatomy via Self-Supervision.
Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2024 Jun;2024:11269-11281. doi: 10.1109/cvpr52733.2024.01071. Epub 2024 Sep 16.
2
Self-supervised learning framework application for medical image analysis: a review and summary.
Biomed Eng Online. 2024 Oct 27;23(1):107. doi: 10.1186/s12938-024-01299-9.
3
Self-supervised learning for medical image analysis: Discriminative, restorative, or adversarial?
Med Image Anal. 2024 May;94:103086. doi: 10.1016/j.media.2024.103086. Epub 2024 Jan 28.

References

1
DiRA: Discriminative, Restorative, and Adversarial Learning for Self-supervised Medical Image Analysis.
Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2022 Jun;2022:20792-20802. doi: 10.1109/cvpr52688.2022.02016. Epub 2022 Sep 27.
2
Guest Editorial Annotation-Efficient Deep Learning: The Holy Grail of Medical Imaging.
IEEE Trans Med Imaging. 2021 Oct;40(10):2526-2533. doi: 10.1109/tmi.2021.3089292. Epub 2021 Sep 30.
3
Transferable Visual Words: Exploiting the Semantics of Anatomical Patterns for Self-Supervised Learning.
IEEE Trans Med Imaging. 2021 Oct;40(10):2857-2868. doi: 10.1109/TMI.2021.3060634. Epub 2021 Sep 30.
4
Models Genesis.
Med Image Anal. 2021 Jan;67:101840. doi: 10.1016/j.media.2020.101840. Epub 2020 Oct 13.
5
Self-Supervised Visual Feature Learning With Deep Neural Networks: A Survey.
IEEE Trans Pattern Anal Mach Intell. 2021 Nov;43(11):4037-4058. doi: 10.1109/TPAMI.2020.2992393. Epub 2021 Oct 1.
6
Computer-aided detection and visualization of pulmonary embolism using a novel, compact, and discriminative image representation.
Med Image Anal. 2019 Dec;58:101541. doi: 10.1016/j.media.2019.101541. Epub 2019 Aug 6.
7
The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS).
IEEE Trans Med Imaging. 2015 Oct;34(10):1993-2024. doi: 10.1109/TMI.2014.2377694. Epub 2014 Dec 4.
