Suppr 超能文献


Similar Articles

1. Asymptotic Properties of Neural Network Sieve Estimators.
J Nonparametr Stat. 2023;35(4):839-868. doi: 10.1080/10485252.2023.2209218. Epub 2023 May 13.

2. Universal sieve-based strategies for efficient estimation using machine learning tools.
Bernoulli (Andover). 2021 Nov;27(4):2300-2336. doi: 10.3150/20-BEJ1309. Epub 2021 Aug 24.

3. A goodness-of-fit test based on neural network sieve estimators.
Stat Probab Lett. 2021 Jul;174. doi: 10.1016/j.spl.2021.109100. Epub 2021 Mar 26.

4. Universal approximation theorem for vector- and hypercomplex-valued neural networks.
Neural Netw. 2024 Dec;180:106632. doi: 10.1016/j.neunet.2024.106632. Epub 2024 Aug 13.

5. Nonlinear function-on-scalar regression via functional universal approximation.
Biometrics. 2023 Dec;79(4):3319-3331. doi: 10.1111/biom.13838. Epub 2023 Feb 27.

6. Constructive approximation to multivariate function by decay RBF neural network.
IEEE Trans Neural Netw. 2010 Sep;21(9):1517-23. doi: 10.1109/TNN.2010.2055888. Epub 2010 Aug 5.

7. Design of asymptotic estimators: an approach based on neural networks and nonlinear programming.
IEEE Trans Neural Netw. 2007 Jan;18(1):86-96. doi: 10.1109/TNN.2006.883015.

8. Consistency of posterior distributions for neural networks.
Neural Netw. 2000 Jul;13(6):629-42. doi: 10.1016/s0893-6080(00)00045-9.

9. Stochastic complexities of reduced rank regression in Bayesian estimation.
Neural Netw. 2005 Sep;18(7):924-33. doi: 10.1016/j.neunet.2005.03.014.

10. Universal approximation of extreme learning machine with adaptive growth of hidden nodes.
IEEE Trans Neural Netw Learn Syst. 2012 Feb;23(2):365-71. doi: 10.1109/TNNLS.2011.2178124.

Cited By

1. An exploration of testing genetic associations using goodness-of-fit statistics based on deep ReLU neural networks.
Front Syst Biol. 2024 Nov 18;4:1460369. doi: 10.3389/fsysb.2024.1460369. eCollection 2024.

2. Neural networks for geospatial data.
J Am Stat Assoc. 2025;120(549):535-547. doi: 10.1080/01621459.2024.2356293. Epub 2024 Jun 24.

3. A goodness-of-fit test based on neural network sieve estimators.
Stat Probab Lett. 2021 Jul;174. doi: 10.1016/j.spl.2021.109100. Epub 2021 Mar 26.

References Cited in This Article

1. A goodness-of-fit test based on neural network sieve estimators.
Stat Probab Lett. 2021 Jul;174. doi: 10.1016/j.spl.2021.109100. Epub 2021 Mar 26.

2. Semiparametric ARX neural-network models with an application to forecasting inflation.
IEEE Trans Neural Netw. 2001;12(4):674-83. doi: 10.1109/72.935081.

Asymptotic Properties of Neural Network Sieve Estimators

Author Information

Xiaoxi Shen, Chang Jiang, Lyudmila Sakhanenko, Qing Lu

Affiliations

Texas State University, San Marcos, TX, USA.

University of Florida, Gainesville, FL, USA.

Publication Information

J Nonparametr Stat. 2023;35(4):839-868. doi: 10.1080/10485252.2023.2209218. Epub 2023 May 13.

DOI: 10.1080/10485252.2023.2209218
PMID: 38169985
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10760986/
Abstract

Neural networks have become one of the most popularly used methods in machine learning and artificial intelligence. Due to the universal approximation theorem (Hornik et al., 1989), a neural network with one hidden layer can approximate any continuous function on compact support as long as the number of hidden units is sufficiently large. Statistically, a neural network can be classified into a nonlinear regression framework. However, if we consider it parametrically, due to the unidentifiability of the parameters, it is difficult to derive its asymptotic properties. Instead, we consider the estimation problem in a nonparametric regression framework and use the results from sieve estimation to establish the consistency, the rates of convergence and the asymptotic normality of the neural network estimators. We also illustrate the validity of the theories via simulations.
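The sieve idea in the abstract — approximating an unknown regression function by one-hidden-layer networks whose number of hidden units grows with the sample size — can be illustrated with a minimal numpy sketch. This is not the estimator analyzed in the paper: for simplicity it draws the hidden-layer weights at random and fits only the output weights by least squares (a random-feature shortcut in the spirit of extreme learning machines), which is enough to show that enlarging the sieve (adding hidden units) shrinks the error of a sigmoid network fit to a smooth regression function.

```python
import numpy as np

def sieve_fit(x, y, r, seed=0):
    """One-hidden-layer sigmoid network with r hidden units.
    Hidden weights and biases are drawn at random and held fixed;
    only the output weights are estimated, by least squares.
    (A random-feature shortcut, not the paper's full sieve estimator.)"""
    rng = np.random.default_rng(seed)
    W = rng.normal(scale=3.0, size=r)    # input-to-hidden weights
    b = rng.uniform(-3.0, 3.0, size=r)   # hidden biases
    H = 1.0 / (1.0 + np.exp(-(np.outer(x, W) + b)))  # n-by-r sigmoid features
    c, *_ = np.linalg.lstsq(H, y, rcond=None)        # fitted output weights
    return H @ c

rng = np.random.default_rng(1)
x = np.linspace(-2.0, 2.0, 200)
f_true = np.sin(2.0 * x)                      # true regression function
y = f_true + 0.1 * rng.normal(size=x.shape)   # noisy observations

# A larger sieve (more hidden units) yields a smaller error against f_true.
mse_small = np.mean((sieve_fit(x, y, r=2) - f_true) ** 2)
mse_large = np.mean((sieve_fit(x, y, r=25) - f_true) ** 2)
```

Growing r with the sample size n (not too fast, so that estimation variance stays controlled) is exactly the bias-variance tradeoff that drives the consistency and rate results for sieve estimators.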
