Guo Meng-Hao, Zhang Yi, Mu Tai-Jiang, Huang Sharon X, Hu Shi-Min
IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):11186-11199. doi: 10.1109/TPAMI.2024.3460180. Epub 2024 Nov 6.
Benefiting from advances in large-scale pre-training, foundation models have demonstrated remarkable capability in fields such as natural language processing and computer vision. However, to achieve expert-level performance in specific applications, such models often need to be fine-tuned with domain-specific knowledge. In this paper, we focus on enabling vision-language models to unleash more potential for visual understanding tasks under few-shot tuning. Specifically, we propose a novel adapter, dubbed lusterAdapter, which is based on a trainable multi-prototype clustering algorithm, for tuning the CLIP model. It can not only alleviate the concern of catastrophic forgetting in foundation models by introducing anchors to inherit common knowledge, but also improve the utilization efficiency of the few annotated samples by bringing in clustering and domain priors, thereby improving few-shot tuning performance. We have conducted extensive experiments on 11 common classification benchmarks. The results show that our method significantly surpasses the original CLIP and achieves state-of-the-art (SOTA) performance across all benchmarks and settings. For example, under the 16-shot setting, our method improves over the original CLIP by 19.6%, and also surpasses TIP-Adapter and GraphAdapter by 2.7% and 2.2%, respectively, in terms of average accuracy across the 11 benchmarks.
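The core idea described in the abstract — classifying a query by its similarity to multiple learned prototypes per class, blended with zero-shot "anchor" logits from the frozen model — can be illustrated with a minimal sketch. This is not the paper's implementation: the k-means prototype initialization, the stand-in text anchors, and the blend weight `alpha` are all illustrative assumptions on toy data.

```python
import numpy as np

def l2_normalize(x, axis=-1):
    # CLIP-style features are compared by cosine similarity, so normalize.
    return x / np.linalg.norm(x, axis=axis, keepdims=True)

def kmeans(feats, k, iters=20, seed=0):
    """Plain k-means: derive k prototypes from one class's few-shot features."""
    rng = np.random.default_rng(seed)
    centers = feats[rng.choice(len(feats), size=k, replace=False)].copy()
    for _ in range(iters):
        # Assign each feature to its nearest center, then recompute centers.
        dists = ((feats[:, None, :] - centers[None, :, :]) ** 2).sum(-1)
        assign = dists.argmin(1)
        for j in range(k):
            if (assign == j).any():
                centers[j] = feats[assign == j].mean(0)
    return centers

def prototype_logits(query, prototypes_per_class):
    """Score each class by the query's best-matching prototype."""
    query = l2_normalize(query)
    return np.array([(l2_normalize(P) @ query).max() for P in prototypes_per_class])

# Toy setup: 2 classes, 16-shot features in an 8-dim space (assumed sizes).
rng = np.random.default_rng(42)
dim, shots, k = 8, 16, 3
class_means = [rng.normal(0.0, 1.0, dim), rng.normal(3.0, 1.0, dim)]
support = [l2_normalize(m + 0.1 * rng.normal(size=(shots, dim))) for m in class_means]

protos = [kmeans(s, k) for s in support]                          # few-shot prototypes
anchors = np.stack([l2_normalize(m) for m in class_means])        # stand-in for frozen text embeddings

query = l2_normalize(class_means[1] + 0.1 * rng.normal(size=dim))
zero_shot = anchors @ query                  # anchor logits (inherited common knowledge)
few_shot = prototype_logits(query, protos)   # prototype logits (few-shot knowledge)
alpha = 0.5                                  # blend weight (assumed, would be tuned)
logits = alpha * few_shot + (1 - alpha) * zero_shot
pred = int(logits.argmax())
```

In the actual method the prototypes would be trainable parameters refined by gradient descent rather than fixed k-means centers, but the scoring structure — max similarity over per-class prototypes, combined with frozen zero-shot logits — follows the same shape.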