Convergence Behavior of DNNs with Mutual-Information-Based Regularization.

Author Information

Jónsson Hlynur, Cherubini Giovanni, Eleftheriou Evangelos

Affiliation

IBM Research Zurich, 8803 Rüschlikon, Switzerland.

Publication Information

Entropy (Basel). 2020 Jun 30;22(7):727. doi: 10.3390/e22070727.

Abstract

Information theory concepts are leveraged with the goal of better understanding and improving Deep Neural Networks (DNNs). The information plane of a neural network describes the behavior, over the course of training, of the mutual information between the input/output and the hidden-layer variables at various depths. Previous analysis revealed that, in some networks where finiteness of the mutual information can be established, most of the training epochs are spent on compressing the input. However, estimating mutual information is nontrivial for high-dimensional continuous random variables; consequently, the computation of mutual information for DNNs and its visualization on the information plane have mostly focused on low-complexity fully connected networks. In fact, even the existence of the compression phase in complex DNNs has been questioned and viewed as an open problem. In this paper, we present the convergence of the mutual information on the information plane for a high-dimensional VGG-16 Convolutional Neural Network (CNN) by resorting to Mutual Information Neural Estimation (MINE), thus confirming and extending the results obtained with low-dimensional fully connected networks. Furthermore, we demonstrate the benefits of regularizing the network, especially over a large number of training epochs, by adopting mutual information estimates as additional terms in the network's loss function. Experimental results show that this regularization stabilizes the test accuracy and significantly reduces its variance.
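
As a concrete illustration of the two techniques named in the abstract, the sketch below shows a MINE-style estimator built on the Donsker-Varadhan lower bound, with the resulting estimate added as a penalty term to the task loss. This is a minimal PyTorch sketch under stated assumptions: the statistics-network architecture, the hidden width, the compression sign of the penalty, and the weight beta are illustrative choices, not the authors' exact configuration.

    import math
    import torch
    import torch.nn as nn

    class StatisticsNetwork(nn.Module):
        # Scores (x, z) pairs; maximizing the Donsker-Varadhan bound over its
        # parameters yields the MINE estimate of I(X; Z). The architecture
        # here is an illustrative assumption.
        def __init__(self, x_dim, z_dim, hidden=128):
            super().__init__()
            self.net = nn.Sequential(
                nn.Linear(x_dim + z_dim, hidden), nn.ReLU(),
                nn.Linear(hidden, hidden), nn.ReLU(),
                nn.Linear(hidden, 1),
            )

        def forward(self, x, z):
            return self.net(torch.cat([x, z], dim=1))

    def mine_lower_bound(T, x, z):
        # Donsker-Varadhan bound: E_joint[T(x,z)] - log E_marginal[exp(T(x,z))].
        # Shuffling z within the batch approximates samples from the product
        # of the marginals p(x)p(z).
        scores_joint = T(x, z).squeeze(1)
        z_shuffled = z[torch.randperm(z.size(0))]
        scores_marginal = T(x, z_shuffled).squeeze(1)
        return (scores_joint.mean()
                - (torch.logsumexp(scores_marginal, dim=0) - math.log(z.size(0))))

    def regularized_loss(task_loss, T, x_flat, z_flat, beta=1e-3):
        # Augments the task loss with a mutual-information term, as the
        # abstract describes. The positive sign penalizes I(X; Z), i.e. it
        # encourages compression of the input; beta is an assumed weight.
        return task_loss + beta * mine_lower_bound(T, x_flat, z_flat)

In practice the statistics network T is trained alongside the main model to maximize the bound so that the estimate stays tight; evaluating such estimates per layer across epochs is what allows the information-plane analysis to be extended to a high-dimensional network such as VGG-16.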

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/4992/7517266/0ce6283aced1/entropy-22-00727-g001.jpg
