
Enhancing Few-Shot Out-of-Distribution Detection With Pre-Trained Model Features.

Author Information

Dong Jiuqing, Yao Yifan, Jin Wei, Zhou Heng, Gao Yongbin, Fang Zhijun

Publication Information

IEEE Trans Image Process. 2024;33:6309-6323. doi: 10.1109/TIP.2024.3468874. Epub 2024 Dec 27.

Abstract

Ensuring the reliability of open-world intelligent systems heavily relies on effective out-of-distribution (OOD) detection. Despite notable successes of existing OOD detection methods, their performance in scenarios with limited training samples is still suboptimal. Therefore, we first construct a comprehensive few-shot OOD (FS-OOD) detection benchmark in this paper. Remarkably, our investigation reveals that Parameter-Efficient Fine-Tuning (PEFT) techniques, such as visual prompt tuning and visual adapter tuning, outperform traditional methods like full fine-tuning and linear probing in few-shot OOD detection. Considering that some valuable information from the pre-trained model, which is conducive to OOD detection, may be lost during the fine-tuning process, we reutilize features from the pre-trained models to mitigate this issue. Specifically, we first propose a training-free approach, termed uncertainty score ensemble (USE). This method integrates feature-matching scores to enhance existing OOD detection methods, significantly narrowing the gap between traditional fine-tuning and PEFT techniques. However, due to its training-free property, this method is unable to improve in-distribution (ID) accuracy. To this end, we further propose a method called Domain-Specific and General Knowledge Fusion (DSGF) to improve few-shot OOD detection performance and ID accuracy under different fine-tuning paradigms. Experimental results demonstrate that DSGF enhances few-shot OOD detection across different fine-tuning strategies, shot settings, and OOD detection methods. We believe our work can provide the research community with a novel path to leveraging large-scale visual pre-trained models for addressing FS-OOD detection. The code will be released.
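The score-ensemble idea behind USE can be sketched as follows. This is a minimal illustration, not the paper's exact formulation: it assumes maximum softmax probability (MSP) as the base OOD score and uses cosine similarity to class prototypes built from the *frozen pre-trained* features as the feature-matching term; the function names, prototype construction, and the weighting parameter `alpha` are illustrative assumptions.

```python
import numpy as np

def msp_score(logits):
    # Base OOD score: maximum softmax probability; higher = more ID-like.
    z = logits - logits.max(axis=1, keepdims=True)  # stabilize softmax
    probs = np.exp(z) / np.exp(z).sum(axis=1, keepdims=True)
    return probs.max(axis=1)

def feature_match_score(feats, prototypes):
    # Cosine similarity to the nearest class prototype. The prototypes are
    # assumed to be class-mean features from the frozen pre-trained model,
    # so this term retains knowledge that fine-tuning may have lost.
    f = feats / np.linalg.norm(feats, axis=1, keepdims=True)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    return (f @ p.T).max(axis=1)

def use_score(logits, pretrained_feats, prototypes, alpha=1.0):
    # Uncertainty Score Ensemble (sketch): add the feature-matching score
    # from the pre-trained backbone to an existing OOD score. Training-free:
    # no parameters are updated, which is why ID accuracy is unchanged.
    return msp_score(logits) + alpha * feature_match_score(pretrained_feats, prototypes)
```

Because the ensemble only re-scores samples at inference time, it can be layered on top of any existing OOD scoring function (MSP here stands in for energy-, logit-, or distance-based alternatives).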

