Smooth Function Approximation by Deep Neural Networks with General Activation Functions

Authors

Ohn Ilsang, Kim Yongdai

Affiliation

Department of Statistics, Seoul National University, Seoul 08826, Korea.

Publication

Entropy (Basel). 2019 Jun 26;21(7):627. doi: 10.3390/e21070627.

Abstract

There has been growing interest in the expressivity of deep neural networks. However, most existing work on this topic focuses only on specific activation functions such as the ReLU or the sigmoid. In this paper, we investigate the approximation ability of deep neural networks with a broad class of activation functions that includes most of those used in practice. For this class, we derive the depth, width, and sparsity a deep neural network requires to approximate any Hölder smooth function up to a given approximation error. Based on this approximation error analysis, we establish the minimax optimality of deep neural network estimators with general activation functions in both regression and classification problems.
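
As background for the abstract, the following sketch records the standard definition of the Hölder ball and the typical shape of the bounds in this literature. The displayed rates follow the well-known ReLU case (Yarotsky 2017, cited below); the paper's theorems give the exact constants, logarithmic factors, and admissible activation class.

```latex
% Hölder ball of smoothness \alpha > 0 and radius R > 0 on [0,1]^d
\[
\mathcal{H}^{\alpha,R}\bigl([0,1]^d\bigr)
  := \biggl\{ f : [0,1]^d \to \mathbb{R} \;:\;
     \sum_{|\mathbf{m}| \le \lfloor\alpha\rfloor} \lVert \partial^{\mathbf{m}} f \rVert_\infty
     + \sum_{|\mathbf{m}| = \lfloor\alpha\rfloor} \sup_{\mathbf{x} \ne \mathbf{x}'}
       \frac{\lvert \partial^{\mathbf{m}} f(\mathbf{x}) - \partial^{\mathbf{m}} f(\mathbf{x}') \rvert}
            {\lVert \mathbf{x} - \mathbf{x}' \rVert_\infty^{\alpha - \lfloor\alpha\rfloor}}
     \le R \biggr\}.
\]
% Typical approximation guarantee (order only): for every \epsilon \in (0,1)
% there is a sparse network f_\theta with
\[
\text{depth } L = O\bigl(\log(1/\epsilon)\bigr), \quad
\text{sparsity } S = O\bigl(\epsilon^{-d/\alpha} \log(1/\epsilon)\bigr), \quad
\sup_{\mathbf{x} \in [0,1]^d} \bigl\lvert f(\mathbf{x}) - f_\theta(\mathbf{x}) \bigr\rvert \le \epsilon.
\]
% Matching minimax rate for regression over the Hölder ball (up to log factors),
% which the paper shows sparse DNN estimators attain for general activations:
\[
\inf_{\hat f}\, \sup_{f \in \mathcal{H}^{\alpha,R}}
  \mathbb{E}\, \bigl\lVert \hat f - f \bigr\rVert_2^2 \asymp n^{-2\alpha/(2\alpha+d)}.
\]
```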

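To make the size prescriptions concrete, here is a minimal, hypothetical Python sketch (not from the paper; all names and constants are illustrative) that turns a target error eps, smoothness alpha, and input dimension d into order-of-magnitude depth, width, and sparsity, and lists the layer shapes of the corresponding fully connected network. Any activation in the paper's admissible class, e.g. sigmoid, tanh, or softplus, could then be used between layers.

```python
import math


def prescribed_size(eps: float, alpha: float, d: int) -> tuple[int, int, int]:
    """Order-of-magnitude depth, width, and sparsity sufficient to
    approximate a Hölder-alpha function on [0,1]^d within sup-norm
    error eps. Constants are omitted (set to 1 here); the paper's
    theorems give the exact values for its activation class."""
    depth = max(2, math.ceil(math.log(1.0 / eps)))       # L = O(log(1/eps))
    width = max(1, math.ceil(eps ** (-d / alpha)))       # max hidden width
    sparsity = max(1, math.ceil(eps ** (-d / alpha) * math.log(1.0 / eps)))
    return depth, width, sparsity


def layer_shapes(d: int, depth: int, width: int) -> list[tuple[int, int]]:
    """(in, out) weight shapes of a depth-`depth` fully connected
    network from R^d to R with constant hidden width `width`."""
    dims = [d] + [width] * (depth - 1) + [1]
    return [(dims[i], dims[i + 1]) for i in range(len(dims) - 1)]


if __name__ == "__main__":
    # e.g. a Lipschitz (alpha = 1) function on [0,1]^2, target error 0.1
    L, N, S = prescribed_size(eps=0.1, alpha=1.0, d=2)
    shapes = layer_shapes(d=2, depth=L, width=N)
    n_params = sum(m * n + n for m, n in shapes)  # weights + biases
    print(f"depth={L}, width={N}, sparsity budget={S}")
    print(f"dense parameter count={n_params} (only ~S need be nonzero)")
```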

Similar Articles

1. Approximation of smooth functionals using deep ReLU networks.
Neural Netw. 2023 Sep;166:424-436. doi: 10.1016/j.neunet.2023.07.012. Epub 2023 Jul 18.

2. Neural networks with ReLU powers need less depth.
Neural Netw. 2024 Apr;172:106073. doi: 10.1016/j.neunet.2023.12.027. Epub 2023 Dec 19.

3. Simultaneous neural network approximation for smooth functions.
Neural Netw. 2022 Oct;154:152-164. doi: 10.1016/j.neunet.2022.06.040. Epub 2022 Jul 9.

4. Deep ReLU neural networks in high-dimensional approximation.
Neural Netw. 2021 Oct;142:619-635. doi: 10.1016/j.neunet.2021.07.027. Epub 2021 Jul 29.

5. Approximate Policy Iteration With Deep Minimax Average Bellman Error Minimization.
IEEE Trans Neural Netw Learn Syst. 2025 Feb;36(2):2288-2299. doi: 10.1109/TNNLS.2023.3346992. Epub 2025 Feb 6.

References Cited in This Article

1. Fast convergence rates of deep neural networks for classification.
Neural Netw. 2021 Jun;138:179-197. doi: 10.1016/j.neunet.2021.02.012. Epub 2021 Feb 23.

2. Error bounds for approximations with deep ReLU networks.
Neural Netw. 2017 Oct;94:103-114. doi: 10.1016/j.neunet.2017.07.002. Epub 2017 Jul 13.

3. Deep learning.
Nature. 2015 May 28;521(7553):436-44. doi: 10.1038/nature14539.
