MedCLIP: Contrastive Learning from Unpaired Medical Images and Text.

Authors

Wang Zifeng, Wu Zhenbang, Agarwal Dinesh, Sun Jimeng

Affiliations

Department of Computer Science, University of Illinois Urbana-Champaign.

Adobe.

Publication

Proc Conf Empir Methods Nat Lang Process. 2022 Dec;2022:3876-3887. doi: 10.18653/v1/2022.emnlp-main.256.

Abstract

Existing vision-text contrastive learning methods such as CLIP (Radford et al., 2021) aim to match paired image and caption embeddings while pushing apart unpaired ones, which improves representation transferability and supports zero-shot prediction. However, medical image-text datasets are orders of magnitude smaller than the general image-caption corpora available on the internet. Moreover, previous methods encounter many false negatives: images and reports from separate patients may carry the same semantics yet are wrongly treated as negatives. In this paper, we decouple images and texts for multimodal contrastive learning, scaling the usable training data combinatorially at low cost. We also propose to replace the InfoNCE loss with a semantic matching loss based on medical knowledge to eliminate false negatives in contrastive learning. We show that MedCLIP is a simple yet effective framework: it outperforms state-of-the-art methods on zero-shot prediction, supervised classification, and image-text retrieval. Surprisingly, we observe that with only 20K pre-training data, MedCLIP outperforms the state-of-the-art method (which uses ≈200K data).
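As a rough illustration of the semantic matching idea described in the abstract, the sketch below shows a minimal PyTorch-style objective that replaces InfoNCE's hard one-to-one targets with soft targets derived from knowledge-based label similarity, so that images and reports sharing clinical semantics are not pushed apart as false negatives. This is a hedged sketch, not MedCLIP's actual implementation; the function name `soft_semantic_matching_loss` and the arguments `img_labels` / `txt_labels` (multi-hot clinical entity labels, e.g. extracted from reports) are illustrative assumptions.

```python
import torch
import torch.nn.functional as F

def soft_semantic_matching_loss(img_emb, txt_emb, img_labels, txt_labels, temperature=0.07):
    """Hypothetical semantic matching loss: cross-entropy against soft targets
    built from label similarity instead of InfoNCE's identity targets."""
    # Normalize embeddings so the dot product is cosine similarity.
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / temperature  # (N, M) image-to-text similarity logits

    # Soft targets from clinical label overlap; each row is normalized to sum to 1,
    # so semantically matching pairs beyond the diagonal receive positive weight.
    label_sim = F.normalize(img_labels.float(), dim=-1) @ F.normalize(txt_labels.float(), dim=-1).t()
    targets = label_sim / label_sim.sum(dim=1, keepdim=True).clamp(min=1e-8)

    # Cross-entropy against the soft target distribution (image-to-text direction;
    # a symmetric text-to-image term would typically be averaged in as well).
    log_probs = F.log_softmax(logits, dim=1)
    return -(targets * log_probs).sum(dim=1).mean()

# Toy usage with random embeddings and multi-hot entity labels.
if __name__ == "__main__":
    imgs, txts = torch.randn(8, 512), torch.randn(8, 512)
    img_lbls = torch.randint(0, 2, (8, 14))
    txt_lbls = torch.randint(0, 2, (8, 14))
    print(soft_semantic_matching_loss(imgs, txts, img_lbls, txt_lbls))
```

Because images and texts only need labels, not pairing, any image can be scored against any report, which is how decoupling lets the usable training pairs grow combinatorially.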

Similar Articles

1. MedCLIP: Contrastive Learning from Unpaired Medical Images and Text.
   Proc Conf Empir Methods Nat Lang Process. 2022 Dec;2022:3876-3887. doi: 10.18653/v1/2022.emnlp-main.256.
2. Unpaired Image-Text Matching via Multimodal Aligned Conceptual Knowledge.
   IEEE Trans Pattern Anal Mach Intell. 2025 Jul;47(7):5160-5176. doi: 10.1109/TPAMI.2024.3432552.
3. ProtoCLIP: Prototypical Contrastive Language Image Pretraining.
   IEEE Trans Neural Netw Learn Syst. 2025 Jan;36(1):610-624. doi: 10.1109/TNNLS.2023.3335859. Epub 2025 Jan 7.

Cited By

1. Minimum levels of interpretability for artificial moral agents.
   AI Ethics. 2025;5(3):2071-2087. doi: 10.1007/s43681-024-00536-0. Epub 2024 Jul 31.
2. Unity in Diversity: Collaborative Pre-training Across Multimodal Medical Sources.
   Proc Conf Assoc Comput Linguist Meet. 2024 Aug;2024(Volume 1 Long Papers):3644-3656. doi: 10.18653/v1/2024.acl-long.199.

References Cited in This Article

1. PiCO+: Contrastive Label Disambiguation for Robust Partial Label Learning.
   IEEE Trans Pattern Anal Mach Intell. 2024 May;46(5):3183-3198. doi: 10.1109/TPAMI.2023.3342650. Epub 2024 Apr 3.
2. Deep learning for chest X-ray analysis: A survey.
   Med Image Anal. 2021 Aug;72:102125. doi: 10.1016/j.media.2021.102125. Epub 2021 Jun 5.
3. A Characteristic Chest Radiographic Pattern in the Setting of the COVID-19 Pandemic.
   Radiol Cardiothorac Imaging. 2020 Sep 3;2(5):e200280. doi: 10.1148/ryct.2020200280. eCollection 2020 Oct.
4. An overview of MetaMap: historical perspective and recent advances.
   J Am Med Inform Assoc. 2010 May-Jun;17(3):229-36. doi: 10.1136/jamia.2009.002733.
