Theory of deep convolutional neural networks: Downsampling.

Affiliation

School of Data Science and Department of Mathematics, City University of Hong Kong, Kowloon, Hong Kong.

Publication information

Neural Netw. 2020 Apr;124:319-327. doi: 10.1016/j.neunet.2020.01.018. Epub 2020 Jan 25.

Abstract

Establishing a solid theoretical foundation for structured deep neural networks is greatly desired due to the successful applications of deep learning in various practical domains. This paper aims at an approximation theory of deep convolutional neural networks whose structures are induced by convolutions. To overcome the difficulty in the theoretical analysis of networks whose widths increase linearly because of convolutions, we introduce a downsampling operator to reduce the widths. We prove that downsampled deep convolutional neural networks can approximate ridge functions well, which hints at some advantages of these structured networks in terms of approximation or modeling. We also prove that the output of any multi-layer fully-connected neural network can be realized by that of a downsampled deep convolutional neural network with free parameters of the same order, which shows that, in general, the approximation ability of deep convolutional neural networks is at least as good as that of fully-connected networks. Finally, a theorem for approximating functions on Riemannian manifolds is presented, which demonstrates that deep convolutional neural networks can be used to learn manifold features of data.
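For orientation, a ridge function is a function of the form f(x) = g(ξ·x) for a univariate function g and a fixed direction vector ξ, so approximating one tests how well a network captures one-dimensional structure in high-dimensional input. The sketch below is a minimal NumPy illustration of the structural idea the abstract describes, not the paper's exact construction: the names conv_layer and downsample, the stride, and all sizes are illustrative assumptions. It shows how a zero-padded 1D convolution grows the width linearly with depth, and how a downsampling operator that keeps every m-th entry reduces it again.

```python
import numpy as np

def conv_layer(x, w):
    # Zero-padded 1D convolution: a filter of length s + 1 maps a vector of
    # width d to one of width d + s, so widths grow linearly with depth --
    # the difficulty the abstract's downsampling operator is meant to address.
    return np.convolve(x, w, mode="full")  # output width: len(x) + len(w) - 1

def downsample(x, m):
    # A downsampling operator in the spirit of the abstract: keep every m-th
    # entry to shrink the width (the exact indexing is defined in the paper).
    return x[::m]

# Toy forward pass (all sizes are illustrative assumptions).
rng = np.random.default_rng(0)
x = rng.standard_normal(8)   # input in R^8
w = rng.standard_normal(3)   # filter of length s + 1 = 3, i.e. s = 2

h = np.maximum(conv_layer(x, w), 0.0)  # ReLU convolutional layer: width 8 -> 10
h = downsample(h, 2)                   # width 10 -> 5
print(h.shape)                         # (5,)
```

Under these assumptions, stacking such convolution/downsampling pairs keeps the widths bounded while retaining the weight sharing of convolutions, which appears to be the motivation for the downsampling operator introduced in the paper.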

