Enhancing Few-Shot CLIP With Semantic-Aware Fine-Tuning.

Authors

Zhu Yao, Chen Yuefeng, Mao Xiaofeng, Yan Xiu, Wang Yue, Lu Wang, Wang Jindong, Ji Xiangyang

Publication

IEEE Trans Neural Netw Learn Syst. 2024 Aug 26;PP. doi: 10.1109/TNNLS.2024.3443394.

Abstract

Learning generalized representations from limited training samples is crucial for applying deep neural networks in low-resource scenarios. Recently, methods based on contrastive language-image pretraining (CLIP) have exhibited promising performance in few-shot adaptation tasks. To avoid the catastrophic forgetting and overfitting caused by few-shot fine-tuning, existing works usually freeze the parameters of CLIP pretrained on large-scale datasets, overlooking the possibility that some of those parameters are not suited to downstream tasks. To this end, we revisit CLIP's visual encoder with a specific focus on its distinctive attention pooling layer, which performs a spatially weighted sum of the dense feature maps. Given that dense feature maps contain meaningful semantic information, and that different semantics hold varying importance for different downstream tasks (e.g., prioritizing semantics such as ears and eyes, rather than side mirrors, in pet classification), applying the same weighted-sum operation to dense features across different few-shot tasks may be inappropriate. Hence, we propose fine-tuning the parameters of the attention pooling layer during training to encourage the model to focus on task-specific semantics. At inference, we perform residual blending between the features pooled by the fine-tuned and the original attention pooling layers, incorporating both the few-shot knowledge and the pretrained CLIP's prior knowledge. We term this method semantic-aware fine-tuning (SAFE). SAFE is effective in enhancing the conventional few-shot CLIP and is compatible with the existing adapter approach (termed SAFE-A). Extensive experiments on 11 benchmarks demonstrate that SAFE and SAFE-A significantly outperform the second-best method by 1.51% and 2.38% in the one-shot setting and by 0.48% and 1.37% in the four-shot setting, respectively.
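
The abstract only outlines the fine-tuning and inference-time blending procedure, so the following is a minimal sketch of the idea in PyTorch, assuming OpenAI's CLIP package with the ResNet-50 visual encoder (whose attention pooling layer is exposed as model.visual.attnpool). The blending weight alpha, the hook-based capture of the dense feature map, and the helper blended_image_features are illustrative assumptions, not the authors' released implementation.

```python
import copy

import torch
import clip  # OpenAI CLIP: https://github.com/openai/CLIP

device = "cuda" if torch.cuda.is_available() else "cpu"
model, preprocess = clip.load("RN50", device=device)

# Keep a frozen copy of the pretrained attention pooling layer (CLIP's prior),
# and fine-tune the live one on the few-shot data; everything else stays frozen.
original_pool = copy.deepcopy(model.visual.attnpool)
original_pool.requires_grad_(False)
for name, param in model.named_parameters():
    param.requires_grad = name.startswith("visual.attnpool")

# Capture the dense feature map that feeds the pooling layer with a forward hook,
# so the ResNet stem and residual stages need not be re-implemented here.
_dense = {}
model.visual.attnpool.register_forward_hook(
    lambda module, inputs, output: _dense.update(feat=inputs[0])
)

@torch.no_grad()
def blended_image_features(images: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Blend features pooled by the fine-tuned and the original pooling layers.

    `alpha` is a hypothetical mixing weight; the abstract does not specify the
    exact blending coefficient.
    """
    f_tuned = model.encode_image(images)     # pooled by the fine-tuned attnpool
    f_prior = original_pool(_dense["feat"])  # same dense map, pooled by the frozen copy
    return alpha * f_tuned + (1.0 - alpha) * f_prior
```

Under these assumptions, training would optimize only model.visual.attnpool on the few-shot examples, and blended_image_features would replace the standard pooled image feature at inference.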
