

Adapting Vision-Language Models via Learning to Inject Knowledge.

Authors

Xuan Shiyu, Yang Ming, Zhang Shiliang

Publication

IEEE Trans Image Process. 2024;33:5798-5809. doi: 10.1109/TIP.2024.3468884. Epub 2024 Oct 15.

DOI: 10.1109/TIP.2024.3468884
PMID: 39356597
Abstract

Pre-trained vision-language models (VLM) such as CLIP have demonstrated impressive zero-shot performance on various vision tasks. Trained on millions or even billions of image-text pairs, the text encoder has memorized a substantial amount of appearance knowledge. Such knowledge in VLM is usually leveraged by learning specific task-oriented prompts, which may limit its performance on unseen tasks. This paper proposes a new knowledge injection framework to pursue a generalizable adaptation of VLM to downstream vision tasks. Instead of learning task-specific prompts, we extract task-agnostic knowledge features and insert them into features of input images or texts. The fused features hence gain better discriminative capability and robustness to intra-category variances. Those knowledge features are generated by inputting learnable prompt sentences into the text encoder of the VLM and extracting its multi-layer features. A new knowledge injection module (KIM) is proposed to refine text features or visual features using knowledge features. This knowledge injection framework enables both modalities to benefit from the rich knowledge memorized in the text encoder. Experiments show that our method outperforms recently proposed methods under few-shot learning, base-to-new classes generalization, cross-dataset transfer, and domain generalization settings. For instance, it outperforms CoOp by 4.5% under the few-shot learning setting, and CoCoOp by 4.4% under the base-to-new classes generalization setting. Our code will be released.
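The abstract does not specify how the knowledge injection module fuses knowledge features with image or text features. As a rough illustration only, one plausible reading of "refining features using knowledge features" is a cross-attention fusion with a residual connection; the function name, shapes, and the attention form below are assumptions, not the paper's actual KIM design.

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def inject_knowledge(features, knowledge):
    """Hypothetical sketch of a knowledge injection step.

    features:  (n, d) input features (image patches or text tokens)
    knowledge: (m, d) task-agnostic features obtained by feeding
               learnable prompt sentences through the VLM text encoder

    Returns fused features of shape (n, d): each input feature attends
    over the knowledge features and adds the attended result back in.
    """
    d = features.shape[-1]
    attn = softmax(features @ knowledge.T / np.sqrt(d))  # (n, m)
    # residual fusion: original features plus attended knowledge
    return features + attn @ knowledge

rng = np.random.default_rng(0)
img_feats = rng.standard_normal((4, 8))   # 4 tokens, dim 8
know_feats = rng.standard_normal((3, 8))  # 3 knowledge vectors
fused = inject_knowledge(img_feats, know_feats)
print(fused.shape)  # (4, 8)
```

Because the same fusion can be applied to text-side features, a module of this shape would let both modalities draw on knowledge memorized in the text encoder, which matches the framework's stated goal.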


Similar Articles

1. Adapting Vision-Language Models via Learning to Inject Knowledge.
   IEEE Trans Image Process. 2024;33:5798-5809. doi: 10.1109/TIP.2024.3468884. Epub 2024 Oct 15.
2. Learning Domain Invariant Prompt for Vision-Language Models.
   IEEE Trans Image Process. 2024;33:1348-1360. doi: 10.1109/TIP.2024.3362062. Epub 2024 Feb 14.
3. X-VLM: All-in-One Pre-Trained Model for Vision-Language Tasks.
   IEEE Trans Pattern Anal Mach Intell. 2024 May;46(5):3156-3168. doi: 10.1109/TPAMI.2023.3339661. Epub 2024 Apr 3.
4. Vision-Language Models for Vision Tasks: A Survey.
   IEEE Trans Pattern Anal Mach Intell. 2024 Aug;46(8):5625-5644. doi: 10.1109/TPAMI.2024.3369699. Epub 2024 Jul 2.
5. Zero-shot prompt-based video encoder for surgical gesture recognition.
   Int J Comput Assist Radiol Surg. 2025 Feb;20(2):311-321. doi: 10.1007/s11548-024-03257-1. Epub 2024 Sep 17.
6. MCPL: Multi-Modal Collaborative Prompt Learning for Medical Vision-Language Model.
   IEEE Trans Med Imaging. 2024 Dec;43(12):4224-4235. doi: 10.1109/TMI.2024.3418408. Epub 2024 Dec 2.
7. Fine-Grained Visual-Text Prompt-Driven Self-Training for Open-Vocabulary Object Detection.
   IEEE Trans Neural Netw Learn Syst. 2024 Nov;35(11):16277-16287. doi: 10.1109/TNNLS.2023.3293484. Epub 2024 Oct 29.
8. Prompt-guided and multimodal landscape scenicness assessments with vision-language models.
   PLoS One. 2024 Sep 30;19(9):e0307083. doi: 10.1371/journal.pone.0307083. eCollection 2024.
9. A Foundation Language-Image Model of the Retina (FLAIR): encoding expert knowledge in text supervision.
   Med Image Anal. 2025 Jan;99:103357. doi: 10.1016/j.media.2024.103357. Epub 2024 Oct 1.
10. Utilizing Geographical Distribution Statistical Data to Improve Zero-Shot Species Recognition.
    Animals (Basel). 2024 Jun 7;14(12):1716. doi: 10.3390/ani14121716.