Quétant Guillaume, Belousov Yury, Kinakh Vitaliy, Voloshynovskiy Slava
Centre Universitaire d'Informatique, Université de Genève, Route de Drize 7, CH-1227 Carouge, Switzerland.
Entropy (Basel). 2023 Oct 21;25(10):1471. doi: 10.3390/e25101471.
We present a novel information-theoretic framework, termed TURBO, designed to systematically analyse and generalise auto-encoding methods. We start by examining the principles of the information bottleneck and bottleneck-based networks in the auto-encoding setting and identifying their inherent limitations, which become more prominent for data with multiple relevant, physics-related representations. The TURBO framework is then introduced, providing a comprehensive derivation of its core concept: the maximisation of mutual information between various data representations, expressed in two directions reflecting the information flows. We show that numerous prevalent neural network models are encompassed within this framework. The paper underscores the insufficiency of the information bottleneck concept in elucidating all such models, thereby establishing TURBO as a preferable theoretical reference. The introduction of TURBO contributes to a richer understanding of data representation and of the structure of neural network models, enabling more efficient and versatile applications.
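The abstract's core concept, maximising mutual information between paired data representations, can be illustrated with a generic variational lower bound such as InfoNCE. The toy NumPy sketch below is purely illustrative and is not the paper's actual TURBO objective; the function name and the synthetic data are assumptions made for the example.

```python
import numpy as np

def infonce_lower_bound(z_x, z_y, temperature=0.1):
    """Illustrative InfoNCE lower bound on the mutual information I(Z_x; Z_y).

    z_x, z_y: arrays of shape (n, d) holding paired representations;
    row i of z_x corresponds to row i of z_y.
    """
    # L2-normalise so that the logits are scaled cosine similarities.
    z_x = z_x / np.linalg.norm(z_x, axis=1, keepdims=True)
    z_y = z_y / np.linalg.norm(z_y, axis=1, keepdims=True)
    logits = z_x @ z_y.T / temperature
    # Row-wise log-softmax; the diagonal entries are the positive pairs.
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    n = z_x.shape[0]
    # InfoNCE: log(n) + mean log-probability of the matched pair.
    return np.log(n) + log_probs[np.arange(n), np.arange(n)].mean()

rng = np.random.default_rng(0)
z = rng.normal(size=(64, 16))
# Aligned pairs (second view is a lightly perturbed copy) carry high
# mutual information; shuffled pairs carry almost none.
aligned = infonce_lower_bound(z, z + 0.01 * rng.normal(size=z.shape))
shuffled = infonce_lower_bound(z, rng.permutation(z, axis=0))
print(aligned > shuffled)
```

Maximising such a bound with respect to the encoder parameters, rather than merely compressing through a bottleneck, is the kind of bidirectional objective the abstract alludes to.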