Going Deeper, Generalizing Better: An Information-Theoretic View for Deep Learning.

Authors

Zhang Jingwei, Liu Tongliang, Tao Dacheng

Publication

IEEE Trans Neural Netw Learn Syst. 2024 Nov;35(11):16683-16695. doi: 10.1109/TNNLS.2023.3297113. Epub 2024 Oct 29.

Abstract

Deep learning has transformed computer vision, natural language processing, and speech recognition. However, two critical questions remain open: 1) why do deep neural networks (DNNs) generalize better than shallow networks and 2) does a deeper network always lead to better performance? In this article, we first show that the expected generalization error of neural networks (NNs) can be upper bounded by the mutual information between the learned features in the last hidden layer and the parameters of the output layer. This bound further implies that, under mild conditions, the expected generalization error decreases as the number of layers in the network increases. Layers with strict information loss, such as convolutional or pooling layers, reduce the generalization error of the whole network; this answers the first question. However, zero expected generalization error does not imply a small test error: the expected training error grows large when the information needed to fit the data is lost as layers are added. The claim "the deeper the better" is therefore conditioned on a small training error. Finally, we show that deep learning satisfies a weak notion of stability, and we provide generalization error bounds for noisy stochastic gradient descent (SGD) and for binary classification in DNNs.
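
For context, the bound in the abstract belongs to the family of input-output mutual-information bounds. Below is a minimal sketch of that template (the bound of Xu and Raginsky, 2017), not the paper's exact theorem; the paper refines the mutual-information term to involve the last hidden layer's features and the output-layer parameters.

% A sketch, not the paper's exact theorem. Assumptions: the loss
% \ell(w, Z) is \sigma-sub-Gaussian for every w; S = (Z_1, \dots, Z_n)
% is the training sample; W denotes the learned weights.
\[
  \bigl|\mathbb{E}[\mathrm{gen}(\mu, P_{W \mid S})]\bigr|
  \;\le\; \sqrt{\frac{2\sigma^{2}}{n}\, I(S; W)}.
\]
% The depth argument rests on the data-processing inequality along the
% feedforward Markov chain of hidden representations T_1, \dots, T_L:
\[
  X \to T_1 \to \cdots \to T_L
  \quad\Longrightarrow\quad
  I(X; T_L) \le I(X; T_{L-1}) \le \cdots \le I(X; T_1),
\]
% so each strictly information-losing layer (e.g., pooling) can only
% shrink the mutual-information term, and with it the bound.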
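
The "noisy SGD" analyzed in the abstract is SGD with injected Gaussian noise. The following is a minimal, self-contained sketch in NumPy; the toy quadratic loss, step size, and noise scale here are illustrative assumptions, not the paper's settings.

import numpy as np

def noisy_sgd(grad, w0, lr=0.01, noise_std=0.1, steps=1000, seed=0):
    """One run of noisy SGD: a gradient step plus isotropic Gaussian noise.

    The injected noise limits how strongly the final weights can depend
    on any single training example, which is what keeps mutual-information
    generalization bounds of the kind above finite.
    """
    rng = np.random.default_rng(seed)
    w = np.asarray(w0, dtype=float).copy()
    for _ in range(steps):
        w = w - lr * grad(w) + noise_std * rng.standard_normal(w.shape)
    return w

# Illustrative usage on the toy loss L(w) = 0.5 * ||w||^2, whose gradient is w.
w_final = noisy_sgd(grad=lambda w: w, w0=np.ones(3))
print(w_final)  # stays near the minimizer 0, jittered by the injected noise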
