
Exploring the Trade-Off in the Variational Information Bottleneck for Regression with a Single Training Run

Authors

Kudo Sota, Ono Naoaki, Kanaya Shigehiko, Huang Ming

Affiliations

Graduate School of Science and Technology, Nara Institute of Science and Technology, Ikoma 630-0192, Japan.

Institute of Advanced Computing and Digital Engineering, Shenzhen Institute of Advanced Technology, Shenzhen 518055, China.

Publication

Entropy (Basel). 2024 Nov 30;26(12):1043. doi: 10.3390/e26121043.

Abstract

An information bottleneck (IB) enables the acquisition of useful representations from data by retaining necessary information while reducing unnecessary information. In its objective function, the Lagrange multiplier β controls the trade-off between retention and reduction. This study analyzes the Variational Information Bottleneck (VIB), a standard IB method in deep learning, in the settings of regression problems and derives its optimal solution. Based on this analysis, we propose a framework for regression problems that can obtain the optimal solution of the VIB for all β values with a single training run. This is in contrast to conventional methods that require one training run for each β. The optimization performance of this framework is theoretically discussed and experimentally demonstrated. Our approach not only enhances the efficiency of exploring β in regression problems but also deepens the understanding of the IB's behavior and its effects in this setting.
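The abstract describes the VIB objective, in which the Lagrange multiplier β weighs a compression (rate) term against a prediction (distortion) term. The following is a minimal sketch of what such an objective looks like for regression, assuming the common choice of a Gaussian encoder q(z|x) with a standard normal prior; the function name and interface are illustrative, not the authors' implementation.

```python
import numpy as np

def vib_regression_loss(y_true, y_pred, mu, log_var, beta):
    """Sketch of a VIB-style objective for regression.

    Distortion: mean squared error of predictions decoded from the latent z.
    Rate: closed-form KL divergence between the Gaussian encoder
    q(z|x) = N(mu, diag(exp(log_var))) and the prior N(0, I),
    summed over latent dimensions and averaged over the batch.
    beta controls the trade-off between compression and prediction.
    """
    distortion = np.mean((y_true - y_pred) ** 2)
    rate = 0.5 * np.mean(
        np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=1)
    )
    return distortion + beta * rate
```

A conventional sweep over β would re-train a model and minimize this loss once per β value; the paper's contribution is a framework that recovers the optimal VIB solution for all β from a single training run.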

