A Comparison of Variational Bounds for the Information Bottleneck Functional.

Author Information

Bernhard C. Geiger, Ian S. Fischer

Affiliations

Know-Center GmbH, Inffeldgasse 13/6, 8010 Graz, Austria.

Google Research, Mountain View, CA 94043, USA.

Publication Information

Entropy (Basel). 2020 Oct 29;22(11):1229. doi: 10.3390/e22111229.

Abstract

In this short note, we relate the variational bounds proposed in Alemi et al. (2017) and Fischer (2020) for the information bottleneck (IB) and the conditional entropy bottleneck (CEB) functional, respectively. Although the two functionals were shown to be equivalent, it was empirically observed that optimizing bounds on the CEB functional achieves better generalization performance and adversarial robustness than optimizing those on the IB functional. This work tries to shed light on this issue by showing that, in the most general setting, no ordering can be established between these variational bounds, while such an ordering can be enforced by restricting the feasible sets over which the optimizations take place. The absence of such an ordering in the general setup suggests that the variational bound on the CEB functional is either more amenable to optimization or a relevant cost function for optimization in its own regard, i.e., without justification from the IB or CEB functionals.
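For orientation, here is a brief sketch of the objects being compared, assuming the Markov chain $Y \leftrightarrow X \leftrightarrow Z$ that is standard in this literature; the trade-off parameters $\beta$, $\gamma$ and the variational distributions $q(\cdot)$ below follow one common convention and are not necessarily the paper's exact notation:

\[
\mathcal{L}_{\mathrm{IB}} = I(X;Z) - \beta\, I(Y;Z),
\qquad
\mathcal{L}_{\mathrm{CEB}} = I(X;Z \mid Y) - \gamma\, I(Y;Z).
\]

Under this Markov chain, $I(X;Z \mid Y) = I(X;Z) - I(Y;Z)$, so $\mathcal{L}_{\mathrm{CEB}} = I(X;Z) - (1+\gamma)\, I(Y;Z)$; the two functionals coincide up to a reparameterization of the trade-off, which is the equivalence the abstract refers to. The variational bounds differ in how the compression term is upper-bounded:

\[
I(X;Z) \le \mathbb{E}\!\left[ D_{\mathrm{KL}}\!\big( p(z \mid x) \,\|\, q(z) \big) \right]
\quad \text{(variational marginal, Alemi et al. 2017)},
\]
\[
I(X;Z \mid Y) \le \mathbb{E}\!\left[ D_{\mathrm{KL}}\!\big( p(z \mid x) \,\|\, q(z \mid y) \big) \right]
\quad \text{(variational backward encoder, Fischer 2020)},
\]

while both bound the prediction term $I(Y;Z)$ from below through a variational decoder $q(y \mid z)$. The paper's question is whether the resulting overall bounds can be ordered against each other.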

Similar Articles

1. A Comparison of Variational Bounds for the Information Bottleneck Functional. Entropy (Basel). 2020 Oct 29;22(11):1229. doi: 10.3390/e22111229.
2. The Conditional Entropy Bottleneck. Entropy (Basel). 2020 Sep 8;22(9):999. doi: 10.3390/e22090999.
3. CEB Improves Model Robustness. Entropy (Basel). 2020 Sep 25;22(10):1081. doi: 10.3390/e22101081.
4. CCGIB: A Cross-Channel Graph Information Bottleneck Principle. IEEE Trans Neural Netw Learn Syst. 2025 May;36(5):9488-9499. doi: 10.1109/TNNLS.2024.3413750. Epub 2025 May 2.
5. On the Difference between the Information Bottleneck and the Deep Information Bottleneck. Entropy (Basel). 2020 Jan 22;22(2):131. doi: 10.3390/e22020131.
6. Learning Representations for Neural Network-Based Classification Using the Information Bottleneck Principle. IEEE Trans Pattern Anal Mach Intell. 2020 Sep;42(9):2225-2239. doi: 10.1109/TPAMI.2019.2909031. Epub 2019 Apr 2.
7. The Deterministic Information Bottleneck. Neural Comput. 2017 Jun;29(6):1611-1630. doi: 10.1162/NECO_a_00961. Epub 2017 Apr 14.
8. Adversarial Training With Anti-Adversaries. IEEE Trans Pattern Anal Mach Intell. 2024 Dec;46(12):10210-10227. doi: 10.1109/TPAMI.2024.3432973. Epub 2024 Nov 6.
9. Adversarial Information Bottleneck. IEEE Trans Neural Netw Learn Syst. 2022 May 20;PP. doi: 10.1109/TNNLS.2022.3172986.
10. Variational Information Bottleneck for Semi-Supervised Classification. Entropy (Basel). 2020 Aug 27;22(9):943. doi: 10.3390/e22090943.

Cited By

1. The Supervised Information Bottleneck. Entropy (Basel). 2025 Apr 22;27(5):452. doi: 10.3390/e27050452.
2. Partial Information Decomposition: Redundancy as Information Bottleneck. Entropy (Basel). 2024 Jun 26;26(7):546. doi: 10.3390/e26070546.
3. Information Bottleneck Analysis by a Conditional Mutual Information Bound. Entropy (Basel). 2021 Jul 29;23(8):974. doi: 10.3390/e23080974.
4. Information Bottleneck: Theory and Applications in Deep Learning. Entropy (Basel). 2020 Dec 14;22(12):1408. doi: 10.3390/e22121408.

References

1. CEB Improves Model Robustness. Entropy (Basel). 2020 Sep 25;22(10):1081. doi: 10.3390/e22101081.
2. The Conditional Entropy Bottleneck. Entropy (Basel). 2020 Sep 8;22(9):999. doi: 10.3390/e22090999.
3. On the Difference between the Information Bottleneck and the Deep Information Bottleneck. Entropy (Basel). 2020 Jan 22;22(2):131. doi: 10.3390/e22020131.
4. Learning Representations for Neural Network-Based Classification Using the Information Bottleneck Principle. IEEE Trans Pattern Anal Mach Intell. 2020 Sep;42(9):2225-2239. doi: 10.1109/TPAMI.2019.2909031. Epub 2019 Apr 2.
5. Information Dropout: Learning Optimal Representations Through Noisy Computation. IEEE Trans Pattern Anal Mach Intell. 2018 Dec;40(12):2897-2905. doi: 10.1109/TPAMI.2017.2784440. Epub 2018 Jan 10.
