

Revealing the Distributional Vulnerability of Discriminators by Implicit Generators.

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2023 Jul;45(7):8888-8901. doi: 10.1109/TPAMI.2022.3229318. Epub 2023 Jun 5.

DOI: 10.1109/TPAMI.2022.3229318
PMID: 37015685
Abstract

In deep neural learning, a discriminator trained on in-distribution (ID) samples may make high-confidence predictions on out-of-distribution (OOD) samples. This poses a significant problem for robust, trustworthy, and safe deep learning. The issue is primarily caused by the limited ID samples observable when training the discriminator without OOD samples. We propose a general approach for fine-tuning discriminators by implicit generators (FIG). FIG is grounded in information theory and applicable to standard discriminators without retraining. It improves a standard discriminator's ability to distinguish ID from OOD samples by generating and penalizing its specific OOD samples. Based on the Shannon entropy, an energy-based implicit generator is inferred from a discriminator without extra training costs. Then, a Langevin dynamics sampler draws specific OOD samples from the implicit generator. Lastly, we design a regularizer fitting the design principle of the implicit generator to induce high entropy on those generated OOD samples. Experiments on different networks and datasets demonstrate that FIG achieves state-of-the-art OOD detection performance.
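The pipeline the abstract describes can be sketched as follows: treat the discriminator's logits as defining an energy (the standard logsumexp construction for classifier-implied energy models is assumed here; the paper's exact formulation may differ), draw samples by Langevin dynamics on that energy, and measure the predictive entropy that a FIG-style regularizer would push toward its maximum. The tiny linear discriminator, step sizes, and all names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy discriminator: a linear softmax classifier with
# weights W (NUM_CLASSES x DIM), logits f(x) = W @ x.
DIM, NUM_CLASSES = 2, 3
W = rng.normal(size=(NUM_CLASSES, DIM))

def logits(x):
    return W @ x

def softmax(z):
    p = np.exp(z - z.max())
    return p / p.sum()

def energy(x):
    # Assumed classifier-implied energy: E(x) = -logsumexp(f(x)),
    # so the implicit generator's density is p(x) ∝ exp(-E(x)).
    z = logits(x)
    return -(z.max() + np.log(np.exp(z - z.max()).sum()))

def energy_grad(x):
    # Analytic gradient for the linear toy model:
    # dE/dx = -W^T softmax(f(x)).
    return -W.T @ softmax(logits(x))

def langevin_sample(x0, steps=100, step_size=0.01):
    # Langevin dynamics: x <- x - (eta/2) dE/dx + sqrt(eta) * noise,
    # descending the energy while injecting Gaussian noise.
    x = x0.copy()
    for _ in range(steps):
        noise = rng.normal(size=x.shape)
        x = x - 0.5 * step_size * energy_grad(x) + np.sqrt(step_size) * noise
    return x

def predictive_entropy(x):
    # Shannon entropy of the softmax prediction; a FIG-style regularizer
    # would penalize low entropy on the drawn samples, pushing it toward
    # the maximum log(NUM_CLASSES).
    p = softmax(logits(x))
    return -(p * np.log(p + 1e-12)).sum()

x_ood = langevin_sample(rng.normal(size=DIM))
H = predictive_entropy(x_ood)
print(H)  # entropy in [0, log 3]
```

In a real fine-tuning loop the entropy term would be added to the training loss and backpropagated through the network; the sketch only shows how a sample is drawn and scored.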


Similar Articles

1
Revealing the Distributional Vulnerability of Discriminators by Implicit Generators.
IEEE Trans Pattern Anal Mach Intell. 2023 Jul;45(7):8888-8901. doi: 10.1109/TPAMI.2022.3229318. Epub 2023 Jun 5.
2
Supervision Adaptation Balancing In-Distribution Generalization and Out-of-Distribution Detection.
IEEE Trans Pattern Anal Mach Intell. 2023 Dec;45(12):15743-15758. doi: 10.1109/TPAMI.2023.3321869. Epub 2023 Nov 3.
3
ReSmooth: Detecting and Utilizing OOD Samples When Training With Data Augmentation.
IEEE Trans Neural Netw Learn Syst. 2024 Jun;35(6):7899-7910. doi: 10.1109/TNNLS.2022.3222044. Epub 2024 Jun 3.
4
WOOD: Wasserstein-Based Out-of-Distribution Detection.
IEEE Trans Pattern Anal Mach Intell. 2024 Feb;46(2):944-956. doi: 10.1109/TPAMI.2023.3328883. Epub 2024 Jan 8.
5
From Global to Local: Multi-scale Out-of-distribution Detection.
IEEE Trans Image Process. 2023 Nov 3;PP. doi: 10.1109/TIP.2023.3328478.
6
Semantic enhanced for out-of-distribution detection.
Front Neurorobot. 2022 Nov 3;16:1018383. doi: 10.3389/fnbot.2022.1018383. eCollection 2022.
7
MLR-OOD: A Markov Chain Based Likelihood Ratio Method for Out-Of-Distribution Detection of Genomic Sequences.
J Mol Biol. 2022 Aug 15;434(15):167586. doi: 10.1016/j.jmb.2022.167586. Epub 2022 Apr 12.
8
Investigation of out-of-distribution detection across various models and training methodologies.
Neural Netw. 2024 Jul;175:106288. doi: 10.1016/j.neunet.2024.106288. Epub 2024 Apr 4.
9
Entropic Out-of-Distribution Detection: Seamless Detection of Unknown Examples.
IEEE Trans Neural Netw Learn Syst. 2022 Jun;33(6):2350-2364. doi: 10.1109/TNNLS.2021.3112897. Epub 2022 Jun 1.
10
The impact of fine-tuning paradigms on unknown plant diseases recognition.
Sci Rep. 2024 Aug 2;14(1):17900. doi: 10.1038/s41598-024-66958-2.

Cited By

1
Dense Out-of-Distribution Detection by Robust Learning on Synthetic Negative Data.
Sensors (Basel). 2024 Feb 15;24(4):1248. doi: 10.3390/s24041248.