Learning Semantics-enriched Representation via Self-discovery, Self-classification, and Self-restoration.

Authors

Haghighi Fatemeh, Hosseinzadeh Taher Mohammad Reza, Zhou Zongwei, Gotway Michael B, Liang Jianming

Affiliations

Arizona State University, Tempe AZ 85281, USA.

Mayo Clinic, Scottsdale AZ 85259, USA.

Publication

Med Image Comput Comput Assist Interv. 2020 Oct;12261:137-147. doi: 10.1007/978-3-030-59710-8_14. Epub 2020 Sep 29.

Abstract

Medical images are naturally associated with rich semantics about the human anatomy, reflected in an abundance of recurring anatomical patterns, offering unique potential to foster deep semantic representation learning and to yield semantically more powerful models for different medical applications. But how exactly such strong yet free semantics embedded in medical images can be harnessed for self-supervised learning remains largely unexplored. To this end, we train deep models to learn semantically enriched visual representation by self-discovery, self-classification, and self-restoration of the anatomy underneath medical images, resulting in a semantics-enriched, general-purpose, pre-trained 3D model, named Semantic Genesis. We examine our Semantic Genesis against all publicly available pre-trained models, obtained by either self-supervision or full supervision, on six distinct target tasks, covering both classification and segmentation in various medical modalities (e.g., CT, MRI, and X-ray). Our extensive experiments demonstrate that Semantic Genesis significantly exceeds all of its 3D counterparts as well as ImageNet-based transfer learning in 2D. This performance is attributed to our novel self-supervised learning framework, which encourages deep models to learn compelling semantic representations from the abundant anatomical patterns arising from consistent anatomies embedded in medical images. Code and pre-trained Semantic Genesis are available at https://github.com/JLiangLab/SemanticGenesis.
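The abstract describes a joint pre-training objective: a classification branch that predicts the pseudo-label of a self-discovered anatomical pattern, and a restoration branch that reconstructs the original crop from a transformed version of it. The sketch below is a minimal, heavily simplified illustration of such a two-head objective; the network sizes, the number of pseudo-classes, the placeholder "transformation", and the loss weights are assumptions made here for illustration, not the authors' released implementation (see the GitHub repository above for that).

```python
# Minimal sketch of a self-classification + self-restoration objective,
# assuming 3D image crops with pseudo-labels produced by a prior
# self-discovery step. All sizes and hyperparameters are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SemanticGenesisSketch(nn.Module):
    def __init__(self, num_pattern_classes: int = 44):  # class count is illustrative
        super().__init__()
        # Tiny 3D encoder (stand-in for a full encoder-decoder backbone).
        self.encoder = nn.Sequential(
            nn.Conv3d(1, 16, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv3d(16, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        # Classification head: which self-discovered anatomical pattern is this crop?
        self.classifier = nn.Sequential(
            nn.AdaptiveAvgPool3d(1), nn.Flatten(),
            nn.Linear(32, num_pattern_classes),
        )
        # Restoration head: recover the original crop from its transformed version.
        self.decoder = nn.Sequential(
            nn.ConvTranspose3d(32, 16, kernel_size=4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose3d(16, 1, kernel_size=4, stride=2, padding=1),
        )

    def forward(self, transformed_crop):
        z = self.encoder(transformed_crop)
        return self.classifier(z), self.decoder(z)


def training_step(model, transformed_crop, original_crop, pattern_label,
                  cls_weight: float = 1.0, rec_weight: float = 1.0):
    """One step of the joint objective: cross-entropy + reconstruction MSE."""
    logits, restored = model(transformed_crop)
    loss_cls = F.cross_entropy(logits, pattern_label)
    loss_rec = F.mse_loss(restored, original_crop)
    return cls_weight * loss_cls + rec_weight * loss_rec


if __name__ == "__main__":
    model = SemanticGenesisSketch()
    x_orig = torch.randn(2, 1, 32, 32, 32)           # original 3D crops
    x_tf = x_orig + 0.1 * torch.randn_like(x_orig)   # placeholder "transformation"
    labels = torch.randint(0, 44, (2,))              # pseudo-labels from self-discovery
    loss = training_step(model, x_tf, x_orig, labels)
    loss.backward()
```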

Similar Articles

Models Genesis.
Med Image Anal. 2021 Jan;67:101840. doi: 10.1016/j.media.2020.101840. Epub 2020 Oct 13.

Models Genesis: Generic Autodidactic Models for 3D Medical Image Analysis.
Med Image Comput Comput Assist Interv. 2019 Oct;11767:384-393. doi: 10.1007/978-3-030-32251-9_42. Epub 2019 Oct 10.

Linear semantic transformation for semi-supervised medical image segmentation.
Comput Biol Med. 2024 May;173:108331. doi: 10.1016/j.compbiomed.2024.108331. Epub 2024 Mar 21.

A Systematic Benchmarking Analysis of Transfer Learning for Medical Image Analysis.
Domain Adapt Represent Transf Afford Healthc AI Resour Divers Glob Health (2021). 2021 Sep-Oct;12968:3-13. doi: 10.1007/978-3-030-87722-4_1. Epub 2021 Sep 21.

DiRA: Discriminative, Restorative, and Adversarial Learning for Self-supervised Medical Image Analysis.
Proc IEEE Comput Soc Conf Comput Vis Pattern Recognit. 2022 Jun;2022:20792-20802. doi: 10.1109/cvpr52688.2022.02016. Epub 2022 Sep 27.

Group-Wise Learning for Weakly Supervised Semantic Segmentation.
IEEE Trans Image Process. 2022;31:799-811. doi: 10.1109/TIP.2021.3132834. Epub 2022 Jan 4.

Parts2Whole: Self-supervised Contrastive Learning via Reconstruction.
Domain Adapt Represent Transf Distrib Collab Learn (2020). 2020 Oct;12444:85-95. doi: 10.1007/978-3-030-60548-3_9. Epub 2020 Sep 26.

Cited By

Decoding phenotypic screening: A comparative analysis of image representations.
Comput Struct Biotechnol J. 2024 Mar 12;23:1181-1188. doi: 10.1016/j.csbj.2024.02.022. eCollection 2024 Dec.

References

Models Genesis: Generic Autodidactic Models for 3D Medical Image Analysis.
Med Image Comput Comput Assist Interv. 2019 Oct;11767:384-393. doi: 10.1007/978-3-030-32251-9_42. Epub 2019 Oct 10.

Exploiting the potential of unlabeled endoscopic video data with self-supervised learning.
Int J Comput Assist Radiol Surg. 2018 Jun;13(6):925-933. doi: 10.1007/s11548-018-1772-0. Epub 2018 Apr 27.

NiftyNet: a deep-learning platform for medical imaging.
Comput Methods Programs Biomed. 2018 May;158:113-122. doi: 10.1016/j.cmpb.2018.01.025. Epub 2018 Jan 31.
