

Investigation of out-of-distribution detection across various models and training methodologies.

Affiliations

Institute of Applied Mathematics, Inha University, 100 Inha-ro, Michuhol-gu, 22212, Incheon, Republic of Korea; AI Lab, SmartSocial, 140 Suyeonggangbyeon-daero, Haeundae-gu, 48058, Busan, Republic of Korea.

Department of Mathematics, Inha University, 100 Inha-ro, Michuhol-gu, 22212, Incheon, Republic of Korea.

Publication Information

Neural Netw. 2024 Jul;175:106288. doi: 10.1016/j.neunet.2024.106288. Epub 2024 Apr 4.

Abstract

Machine learning-based algorithms demonstrate impressive performance across numerous fields; however, they continue to suffer from certain limitations. Even sophisticated and precise algorithms often make erroneous predictions when applied to data drawn from a distribution different from that of the training set. Out-of-distribution (OOD) detection, which distinguishes such data from in-distribution data, is a critical research area for overcoming these limitations and building more reliable algorithms. The OOD problem, particularly for image data, has been extensively studied. However, recently developed OOD methods do not fulfill the expectation that OOD performance will increase as in-distribution classification accuracy improves. Our research presents a comprehensive study of OOD detection performance across multiple models and training methodologies to verify this phenomenon. Specifically, we evaluate various pre-trained models popular in the computer vision field with both established and recent OOD detection methods. The experimental results highlight the performance disparity among existing OOD methods. Based on these observations, we introduce Trimmed Rank with Inverse softMax probability (TRIM), a remarkably simple yet effective OOD scoring method for model weights produced by newly developed training methods. Owing to its promising results, the proposed method could serve as a potential tool for enhancing OOD detection performance. The OOD performance of TRIM correlates strongly with a model's in-distribution accuracy and may bridge efforts to improve in-distribution accuracy with the ability to distinguish OOD data.
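To illustrate the general setup the abstract describes (not the paper's TRIM method, whose details are not given here), a minimal sketch of the standard maximum-softmax-probability (MSP) baseline for OOD scoring, assuming only that a classifier produces logits; the `threshold` value is an arbitrary illustration:

```python
import numpy as np

def softmax(logits: np.ndarray) -> np.ndarray:
    """Numerically stable softmax over the last axis."""
    z = logits - logits.max(axis=-1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=-1, keepdims=True)

def msp_score(logits: np.ndarray) -> np.ndarray:
    """Maximum softmax probability: higher means more in-distribution."""
    return softmax(logits).max(axis=-1)

def flag_ood(logits: np.ndarray, threshold: float = 0.5) -> np.ndarray:
    """Flag samples whose confidence falls below the threshold as OOD."""
    return msp_score(logits) < threshold
```

A confidently classified sample (one dominant logit) yields a score near 1 and is kept as in-distribution, while near-uniform logits yield a score near 1/num_classes and are flagged; methods like TRIM aim to produce scores that track in-distribution accuracy more faithfully than this baseline.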

