

Discriminative, Restorative, and Adversarial Learning: Stepwise Incremental Pretraining.

Author Information

Guo Zuwei, Islam Nahid Ul, Gotway Michael B, Liang Jianming

Affiliations

Arizona State University, Tempe, AZ 85281, USA.

Mayo Clinic, Scottsdale, AZ 85259, USA.

Publication Information

Domain Adapt Represent Transf (2022). 2022 Sep;13542:66-76. doi: 10.1007/978-3-031-16852-9_7. Epub 2022 Sep 15.

Abstract

Uniting three self-supervised learning (SSL) ingredients (discriminative, restorative, and adversarial learning) enables collaborative representation learning and yields three transferable components: a discriminative encoder, a restorative decoder, and an adversary encoder. To leverage this advantage, we have redesigned five prominent SSL methods, namely Rotation, Jigsaw, Rubik's Cube, Deep Clustering, and TransVW, and formulated each in a United framework for 3D medical imaging. However, such a United framework increases model complexity and pretraining difficulty. To overcome this difficulty, we develop a stepwise incremental pretraining strategy: a discriminative encoder is first trained via discriminative learning; the pretrained discriminative encoder is then attached to a restorative decoder, forming a skip-connected encoder-decoder, for further joint discriminative and restorative learning; and finally, the pretrained encoder-decoder is associated with an adversary encoder for full discriminative, restorative, and adversarial learning. Our extensive experiments demonstrate that stepwise incremental pretraining stabilizes the training of United models, yielding significant performance gains and annotation-cost reductions via transfer learning on five target tasks, encompassing both classification and segmentation, across diseases, organs, datasets, and modalities. We attribute this performance to the synergy of the three SSL ingredients in our United framework, unleashed via stepwise incremental pretraining. All code and pretrained models are available at GitHub.com/JLiangLab/StepwisePretraining.
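To make the three-step schedule concrete, below is a minimal PyTorch sketch of stepwise incremental pretraining. It illustrates the idea only and is not the authors' released implementation (for that, see GitHub.com/JLiangLab/StepwisePretraining). The toy 3D modules, the pretext classification head, the MSE restoration loss, and the loss weights lam_r and lam_a are all assumptions made for brevity; the loader is assumed to yield (x_t, y, x) triples holding a transformed view, its pretext label, and the original volume to be restored.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DiscriminativeEncoder(nn.Module):
    """Step 1 component: encoder trained on a discriminative pretext task
    (a 4-way rotation-style head is assumed here purely for illustration)."""
    def __init__(self, n_pretext_classes=4):
        super().__init__()
        self.enc1 = nn.Sequential(nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.ReLU())
        self.enc2 = nn.Sequential(nn.Conv3d(16, 32, 3, stride=2, padding=1), nn.ReLU())
        self.head = nn.Linear(32, n_pretext_classes)

    def forward(self, x):
        f1 = self.enc1(x)                     # early feature map, kept for the skip
        f2 = self.enc2(f1)
        logits = self.head(f2.mean(dim=(2, 3, 4)))
        return f1, f2, logits

class RestorativeDecoder(nn.Module):
    """Step 2 component: decoder attached to the pretrained encoder;
    concatenating f1 forms the skip-connected encoder-decoder."""
    def __init__(self):
        super().__init__()
        self.up1 = nn.ConvTranspose3d(32, 16, kernel_size=2, stride=2)
        self.up2 = nn.ConvTranspose3d(32, 1, kernel_size=2, stride=2)  # 16 up + 16 skip

    def forward(self, f1, f2):
        u = F.relu(self.up1(f2))
        u = torch.cat([u, f1], dim=1)         # skip connection
        return self.up2(u)                    # restored volume, same shape as input

class AdversaryEncoder(nn.Module):
    """Step 3 component: scores volumes as original (real) vs. restored (fake)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv3d(1, 16, 3, stride=2, padding=1), nn.LeakyReLU(0.2),
            nn.Conv3d(16, 1, 3, stride=2, padding=1),
        )

    def forward(self, x):
        return self.net(x).mean(dim=(1, 2, 3, 4))  # one realness logit per volume

def pretrain_stepwise(loader, lam_r=1.0, lam_a=0.1):
    """Stepwise incremental pretraining; lam_r and lam_a are assumed weights.
    Volumes must have spatial sizes divisible by 4 so shapes line up exactly."""
    enc, dec, adv = DiscriminativeEncoder(), RestorativeDecoder(), AdversaryEncoder()
    bce = nn.BCEWithLogitsLoss()

    # Step 1: discriminative learning only.
    opt = torch.optim.Adam(enc.parameters(), lr=1e-3)
    for x_t, y, _ in loader:
        _, _, logits = enc(x_t)
        loss = F.cross_entropy(logits, y)
        opt.zero_grad(); loss.backward(); opt.step()

    # Step 2: attach the decoder; joint discriminative + restorative learning.
    opt = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    for x_t, y, x in loader:
        f1, f2, logits = enc(x_t)
        loss = F.cross_entropy(logits, y) + lam_r * F.mse_loss(dec(f1, f2), x)
        opt.zero_grad(); loss.backward(); opt.step()

    # Step 3: add the adversary encoder; full three-ingredient learning.
    opt_g = torch.optim.Adam(list(enc.parameters()) + list(dec.parameters()), lr=1e-3)
    opt_d = torch.optim.Adam(adv.parameters(), lr=1e-3)
    for x_t, y, x in loader:
        f1, f2, logits = enc(x_t)
        x_hat = dec(f1, f2)
        real, fake = torch.ones(x.size(0)), torch.zeros(x.size(0))

        # Adversary update: distinguish originals from restorations.
        d_loss = bce(adv(x), real) + bce(adv(x_hat.detach()), fake)
        opt_d.zero_grad(); d_loss.backward(); opt_d.step()

        # Encoder-decoder update under all three objectives.
        g_loss = (F.cross_entropy(logits, y)
                  + lam_r * F.mse_loss(x_hat, x)
                  + lam_a * bce(adv(x_hat), real))
        opt_g.zero_grad(); g_loss.backward(); opt_g.step()

    return enc, dec, adv
```

A smoke test such as pretrain_stepwise([(torch.randn(2, 1, 16, 16, 16), torch.tensor([0, 1]), torch.randn(2, 1, 16, 16, 16))]) exercises all three steps on one fabricated batch; in practice each step would iterate over the full dataset for many epochs before the next component is attached, which is what stabilizes the United model's training.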



