
Estimates on compressed neural networks regression.

Affiliations

Department of Information and Mathematics Sciences, China Jiliang University, Hangzhou 310018, Zhejiang Province, PR China.

Publication Info

Neural Netw. 2015 Mar;63:10-7. doi: 10.1016/j.neunet.2014.10.008. Epub 2014 Nov 10.

DOI: 10.1016/j.neunet.2014.10.008
PMID: 25463391
Abstract

When the neural element number n of a neural network is larger than the sample size m, the overfitting problem arises since there are more parameters than actual data (more variables than constraints). To overcome the overfitting problem, we propose to reduce the number of neural elements by using a compressed projection A, which does not need to satisfy the Restricted Isometry Property (RIP) condition. By applying probability inequalities and the approximation properties of feedforward neural networks (FNNs), we prove that solving the FNN regression learning algorithm in the compressed domain instead of the original domain reduces the sample error at the price of an increased (but controlled) approximation error. Covering number theory is used to estimate the excess error, and an upper bound on the excess error is given.
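The compressed-domain idea in the abstract can be sketched in a few lines: build a wide layer of random hidden-unit features (n units, more than the m samples), project the feature matrix down to d dimensions with a random matrix A, and solve the least-squares regression over the d compressed coordinates instead of the n original ones. This is a minimal illustrative sketch, not the paper's algorithm: the toy data, the sigmoid random features, the Gaussian choice of A, and the dimensions m, n, d are all assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy regression data: m samples, with more hidden units n than samples.
m, n, d = 50, 200, 20          # sample size m < n; compressed width d << n
X = rng.uniform(-1, 1, (m, 1))
y = np.sin(3 * X[:, 0]) + 0.1 * rng.normal(size=m)

# Random single-hidden-layer FNN features (sigmoid units, random weights).
W = rng.normal(size=(1, n))
b = rng.normal(size=n)
H = 1.0 / (1.0 + np.exp(-(X @ W + b)))   # (m, n) hidden-layer output

# Compressed projection A: here a Gaussian random matrix (an illustrative
# choice), mapping the n hidden units down to d coordinates.
A = rng.normal(size=(n, d)) / np.sqrt(d)

# Solve least squares in the compressed domain (d unknowns) instead of the
# original domain, where n unknowns > m samples would invite overfitting.
Hc = H @ A                               # (m, d) compressed features
beta, *_ = np.linalg.lstsq(Hc, y, rcond=None)

y_hat = Hc @ beta
mse = float(np.mean((y - y_hat) ** 2))
print(mse)
```

With d well below m, the compressed problem is overdetermined and the least-squares fit is stable, mirroring the paper's trade: smaller sample error for a controlled increase in approximation error.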


Similar Articles

1
Estimates on compressed neural networks regression.
Neural Netw. 2015 Mar;63:10-7. doi: 10.1016/j.neunet.2014.10.008. Epub 2014 Nov 10.
2
Simultaneous L(p)-approximation order for neural networks.
Neural Netw. 2005 Sep;18(7):914-23. doi: 10.1016/j.neunet.2005.03.013.
3
Scalable learning method for feedforward neural networks using minimal-enclosing-ball approximation.
Neural Netw. 2016 Jun;78:51-64. doi: 10.1016/j.neunet.2016.02.005. Epub 2016 Apr 1.
4
Optimized approximation algorithm in neural networks without overfitting.
IEEE Trans Neural Netw. 2008 Jun;19(6):983-95. doi: 10.1109/TNN.2007.915114.
5
Computational capabilities of graph neural networks.
IEEE Trans Neural Netw. 2009 Jan;20(1):81-102. doi: 10.1109/TNN.2008.2005141.
6
New training strategies for constructive neural networks with application to regression problems.
Neural Netw. 2004 May;17(4):589-609. doi: 10.1016/j.neunet.2004.02.002.
7
A systematic and effective supervised learning mechanism based on Jacobian rank deficiency.
Neural Comput. 1998 May 15;10(4):1031-45. doi: 10.1162/089976698300017610.
8
Robust adaptive learning of feedforward neural networks via LMI optimizations.
Neural Netw. 2012 Jul;31:33-45. doi: 10.1016/j.neunet.2012.03.003. Epub 2012 Mar 14.
9
Robust sequential learning of feedforward neural networks in the presence of heavy-tailed noise.
Neural Netw. 2015 Mar;63:31-47. doi: 10.1016/j.neunet.2014.11.001. Epub 2014 Nov 15.
10
Comparing support vector machines and feedforward neural networks with similar hidden-layer weights.
IEEE Trans Neural Netw. 2007 May;18(3):959-63. doi: 10.1109/TNN.2007.891656.