Enhancing Medical Vision-Language Contrastive Learning via Inter-Matching Relation Modeling

Authors

Li Mingjian, Meng Mingyuan, Fulham Michael, Feng David Dagan, Bi Lei, Kim Jinman

Publication

IEEE Trans Med Imaging. 2025 Jun;44(6):2463-2476. doi: 10.1109/TMI.2025.3534436.

Abstract

Medical image representations can be learned through medical vision-language contrastive learning (mVLCL), where medical imaging reports serve as weak supervision through image-text alignment. These learned image representations can be transferred to, and benefit, various downstream medical vision tasks such as disease classification and segmentation. Recent mVLCL methods attempt to align image sub-regions with report keywords as local-matchings. However, these methods aggregate all local-matchings via simple pooling operations and ignore the inherent relations between them. They therefore fail to reason between local-matchings that are semantically related, e.g., local-matchings that correspond to a disease word and a location word (semantic-relations), and also fail to differentiate such clinically important local-matchings from those that correspond to less meaningful words, e.g., conjunctions (importance-relations). Hence, we propose an mVLCL method that models the inter-matching relations between local-matchings via a relation-enhanced contrastive learning framework (RECLF). In RECLF, we introduce a semantic-relation reasoning module (SRM) and an importance-relation reasoning module (IRM) to enable more fine-grained report supervision for image representation learning. We evaluated our method on six public benchmark datasets across four downstream tasks: segmentation, zero-shot classification, linear classification, and cross-modal retrieval. Our results demonstrated the superiority of RECLF over state-of-the-art mVLCL methods, with consistent improvements across single-modal and cross-modal tasks. These results suggest that, by modeling the inter-matching relations, RECLF can learn improved medical image representations with better generalization capabilities.

