Revealing the Distributional Vulnerability of Discriminators by Implicit Generators.

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2023 Jul;45(7):8888-8901. doi: 10.1109/TPAMI.2022.3229318. Epub 2023 Jun 5.

Abstract

In deep neural learning, a discriminator trained on in-distribution (ID) samples may make high-confidence predictions on out-of-distribution (OOD) samples. This poses a significant problem for robust, trustworthy, and safe deep learning. The issue is primarily caused by the limited ID samples observable when training the discriminator without access to OOD samples. We propose a general approach for fine-tuning discriminators by implicit generators (FIG). FIG is grounded in information theory and applies to standard discriminators without retraining. It improves a standard discriminator's ability to distinguish ID from OOD samples by generating and penalizing its specific OOD samples. Based on the Shannon entropy, an energy-based implicit generator is inferred from the discriminator without extra training cost. A Langevin dynamics sampler then draws specific OOD samples from the implicit generator. Lastly, we design a regularizer, consistent with the design principle of the implicit generator, that induces high entropy on those generated OOD samples. Experiments on different networks and datasets demonstrate that FIG achieves state-of-the-art OOD detection performance.
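
To make the pipeline concrete, below is a minimal PyTorch sketch of the three steps the abstract describes: reading an energy-based density off the discriminator's logits, drawing OOD-like samples with a Langevin dynamics sampler, and penalizing low predictive entropy on those samples. This is an illustrative reading rather than the paper's implementation: the direction of the Langevin walk (away from the implicit ID density), the step sizes, the weight lam, and all function names (id_log_density, sample_ood, fig_style_loss) are assumptions; only the classifier-as-EBM identity log p(x) ∝ logsumexp_y f(x)[y] and the high-entropy regularizer follow directly from the abstract.

    import torch
    import torch.nn.functional as F

    def id_log_density(model, x):
        # Classifier-as-EBM view: log p(x) = logsumexp_y f(x)[y] - log Z,
        # so the logits define an (unnormalized) implicit generator for free.
        return torch.logsumexp(model(x), dim=1)

    def sample_ood(model, x_init, steps=20, step_size=1e-2, noise_std=1e-2):
        # Langevin dynamics sampler. Assumption: the implicit generator's
        # specific OOD samples live where the discriminator's implicit
        # density is low, so we take noisy gradient steps that decrease
        # log p(x), starting from ID inputs.
        x = x_init.clone().detach().requires_grad_(True)
        for _ in range(steps):
            grad = torch.autograd.grad(id_log_density(model, x).sum(), x)[0]
            x = x - step_size * grad + noise_std * torch.randn_like(x)
            x = x.detach().requires_grad_(True)
        return x.detach()

    def fig_style_loss(model, x_id, y_id, x_ood, lam=0.1):
        # Cross-entropy on ID data plus a regularizer that pushes
        # predictions on the generated OOD samples toward uniform, i.e.
        # induces high Shannon entropy (lam is an illustrative weight).
        ce = F.cross_entropy(model(x_id), y_id)
        log_p = F.log_softmax(model(x_ood), dim=1)
        neg_entropy = (log_p.exp() * log_p).sum(dim=1).mean()  # equals -H(p)
        return ce + lam * neg_entropy  # minimizing -H(p) maximizes entropy

In a fine-tuning loop of this shape, each batch would call sample_ood on ID inputs and back-propagate fig_style_loss through the discriminator only, so no separate generator network is ever trained.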
