
Universal approximation with quadratic deep networks.

Affiliations

Department of Biomedical Engineering, Rensselaer Polytechnic Institute, Troy, NY, 12180, USA.

IBM Thomas J. Watson Research Center, Yorktown Heights, NY, 10598, USA.

Publication information

Neural Netw. 2020 Apr;124:383-392. doi: 10.1016/j.neunet.2020.01.007. Epub 2020 Jan 18.

Abstract

Deep learning has recently achieved great success in many important applications. In our previous studies, we proposed quadratic (second-order) neurons and deep quadratic neural networks: in a quadratic neuron, the inner product between the input vector and the weight vector of a conventional neuron is replaced with a quadratic function of the input. The resulting quadratic neuron enjoys an enhanced expressive capability over the conventional neuron. However, how quadratic neurons improve the expressive capability of a deep quadratic network, ideally in comparison with a conventional neural network, has not been studied until now. Specifically, we ask four basic questions in this paper: (1) For a one-hidden-layer network structure, is there a function that a quadratic network can approximate much more efficiently than a conventional network? (2) For the same multi-layer network structure, is there a function that can be expressed by a quadratic network but cannot be expressed by conventional neurons in the same structure? (3) Does a quadratic network give new insight into universal approximation? (4) To approximate the same class of functions with the same error bound, can a quantized quadratic network use fewer weights than a quantized conventional network? Our main contributions are four interconnected theorems that shed light on these four questions and demonstrate the merits of a quadratic network in terms of expressive efficiency, unique capability, compact architecture, and computational capacity, respectively.
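
To make the construction in the abstract concrete, here is a minimal sketch in Python/NumPy contrasting a conventional neuron with a quadratic one. The specific quadratic form used (the product of two linear terms plus a weighted sum of squared inputs) is an assumption based on the authors' earlier paper "A new type of neurons for machine learning" (listed in the references below); the variable names and the sigmoid activation are illustrative choices, not the authors' reference implementation.

import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def conventional_neuron(x, w, b):
    # Conventional neuron: activation of the inner product of input and weights.
    return sigmoid(np.dot(w, x) + b)

def quadratic_neuron(x, w1, b1, w2, b2, w3, b3):
    # Quadratic neuron: the inner product is replaced by a quadratic function
    # of the input -- two linear terms multiplied together, plus a weighted
    # sum of squared inputs. (Assumed parameterization; see lead-in above.)
    quad = (np.dot(w1, x) + b1) * (np.dot(w2, x) + b2) + np.dot(w3, x * x)
    return sigmoid(quad + b3)

# Illustrative usage on a random 4-dimensional input.
rng = np.random.default_rng(0)
x = rng.standard_normal(4)
w, w1, w2, w3 = (rng.standard_normal(4) for _ in range(4))
print(conventional_neuron(x, w, b=0.1))
print(quadratic_neuron(x, w1, 0.1, w2, -0.2, w3, 0.3))

Under this parameterization, a quadratic neuron over n inputs carries 3n + 3 parameters versus n + 1 for a conventional neuron; this overhead is what the paper's efficiency and quantization questions weigh against the gain in expressive power.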

Similar articles

1. Universal approximation with quadratic deep networks.
Neural Netw. 2020 Apr;124:383-392. doi: 10.1016/j.neunet.2020.01.007. Epub 2020 Jan 18.
2. On Expressivity and Trainability of Quadratic Networks.
IEEE Trans Neural Netw Learn Syst. 2025 Jan;36(1):1228-1242. doi: 10.1109/TNNLS.2023.3331380. Epub 2025 Jan 7.
3. Quadratic Autoencoder (Q-AE) for Low-Dose CT Denoising.
IEEE Trans Med Imaging. 2020 Jun;39(6):2035-2050. doi: 10.1109/TMI.2019.2963248. Epub 2019 Dec 31.
4. Theory of deep convolutional neural networks: Downsampling.
Neural Netw. 2020 Apr;124:319-327. doi: 10.1016/j.neunet.2020.01.018. Epub 2020 Jan 25.
5. A Deep-Network Piecewise Linear Approximation Formula.
IEEE Access. 2021;9:120665-120674. doi: 10.1109/access.2021.3109173. Epub 2021 Aug 31.
6. A new type of neurons for machine learning.
Int J Numer Method Biomed Eng. 2018 Feb;34(2). doi: 10.1002/cnm.2920. Epub 2017 Sep 15.

Cited by

1. Quadratic Autoencoder (Q-AE) for Low-Dose CT Denoising.
IEEE Trans Med Imaging. 2020 Jun;39(6):2035-2050. doi: 10.1109/TMI.2019.2963248. Epub 2019 Dec 31.

References

1. Universal Approximation Using Radial-Basis-Function Networks.
Neural Comput. 1991 Summer;3(2):246-257. doi: 10.1162/neco.1991.3.2.246.
2. Dense Associative Memory Is Robust to Adversarial Inputs.
Neural Comput. 2018 Dec;30(12):3151-3167. doi: 10.1162/neco_a_01143. Epub 2018 Oct 12.
3. Mastering the game of Go without human knowledge.
Nature. 2017 Oct 18;550(7676):354-359. doi: 10.1038/nature24270.
4. A new type of neurons for machine learning.
Int J Numer Method Biomed Eng. 2018 Feb;34(2). doi: 10.1002/cnm.2920. Epub 2017 Sep 15.
5. Probabilistic lower bounds for approximation by shallow perceptron networks.
Neural Netw. 2017 Jul;91:34-41. doi: 10.1016/j.neunet.2017.04.003. Epub 2017 Apr 19.
6. Deep networks are effective encoders of periodicity.
IEEE Trans Neural Netw Learn Syst. 2014 Oct;25(10):1816-27. doi: 10.1109/TNNLS.2013.2296046.
