


Minimization of error functionals over perceptron networks.

Author information

Kůrková Vera

Affiliation

Institute of Computer Science, Academy of Sciences of the Czech Republic, Prague, CZ 18207.

Publication information

Neural Comput. 2008 Jan;20(1):252-70. doi: 10.1162/neco.2008.20.1.252.
PMID: 18045008
Abstract

Supervised learning of perceptron networks is investigated as an optimization problem. It is shown that both the theoretical and the empirical error functionals achieve minima over sets of functions computable by networks with a given number n of perceptrons. Upper bounds on rates of convergence of these minima with n increasing are derived. The bounds depend on a certain regularity of training data expressed in terms of variational norms of functions interpolating the data (in the case of the empirical error) and the regression function (in the case of the expected error). Dependence of this type of regularity on dimensionality and on magnitudes of partial derivatives is investigated. Conditions on the data, which guarantee that a good approximation of global minima of error functionals can be achieved using networks with a limited complexity, are derived. The conditions are in terms of oscillatory behavior of the data measured by the product of a function of the number of variables d, which is decreasing exponentially fast, and the maximum of the magnitudes of the squares of the L(1)-norms of the iterated partial derivatives of the order d of the regression function or some function, which interpolates the sample of the data. The results are illustrated by examples of data with small and high regularity constructed using Boolean functions and the gaussian function.
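Using standard notation (a sketch with conventional symbols, not taken verbatim from the paper), the empirical error functional minimized over networks of n perceptron units, and the variation-norm bound that governs convergence rates of this kind, can be written as:

```latex
\mathcal{E}_m(f) \;=\; \frac{1}{m}\sum_{i=1}^{m}\bigl(f(x_i)-y_i\bigr)^2,
\qquad
\mathrm{span}_n G \;=\; \Bigl\{\,\sum_{j=1}^{n} c_j\, g_j \;:\; c_j \in \mathbb{R},\; g_j \in G \Bigr\},
\qquad
\min_{f \in \mathrm{span}_n G} \|h - f\|^{2} \;\le\; \frac{s_G^{2}\,\|h\|_{G}^{2} - \|h\|^{2}}{n},
```

where G denotes the set of functions computable by single perceptrons, ||h||_G is the G-variation norm of h (the regression function for the expected error, or a function interpolating the sample for the empirical error), and s_G = sup over g in G of ||g||. The last inequality is the Maurey–Jones–Barron-type bound underlying rates of the form O(1/n) in terms of variational norms.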

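The empirical error functional discussed in the abstract can be made concrete with a minimal sketch (all names, the tanh activation, and the toy sample below are illustrative choices, not taken from the paper):

```python
import math

def perceptron(w, b, x):
    # One perceptron unit with a tanh activation, a smooth illustrative
    # stand-in for the sign/Heaviside perceptrons considered in the paper.
    return math.tanh(sum(wi * xi for wi, xi in zip(w, x)) + b)

def network(units, x):
    # One-hidden-layer perceptron network: a linear combination
    # f(x) = sum_j c_j * g_j(x) of the n units in `units`.
    return sum(c * perceptron(w, b, x) for c, w, b in units)

def empirical_error(units, sample):
    # Empirical (mean squared) error functional over the training sample.
    return sum((network(units, x) - y) ** 2 for x, y in sample) / len(sample)

# Toy sample: one point labelled -1, one labelled +1.
sample = [((0.0,), -1.0), ((1.0,), 1.0)]

# With n = 0 units the network is identically zero.
err0 = empirical_error([], sample)

# A single steep unit centred between the two points already fits the
# sample closely; the achievable minimum can only decrease as n grows.
err1 = empirical_error([(1.0, (10.0,), -5.0)], sample)
```

This illustrates the monotonicity underlying the paper's setting: enlarging the set of networks (increasing n) can only lower the minimum of the empirical error, and the rate at which it decreases is what the variational-norm bounds quantify.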

Similar articles

1. Minimization of error functionals over perceptron networks.
   Neural Comput. 2008 Jan;20(1):252-70. doi: 10.1162/neco.2008.20.1.252.
2. A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
   Neural Netw. 2008 Jun;21(5):786-95. doi: 10.1016/j.neunet.2007.12.036. Epub 2007 Dec 31.
3. Partial distortion entropy maximization for online data clustering.
   Neural Netw. 2007 Sep;20(7):819-31. doi: 10.1016/j.neunet.2007.04.029. Epub 2007 Jul 6.
4. The MEE principle in data classification: a perceptron-based analysis.
   Neural Comput. 2010 Oct;22(10):2698-728. doi: 10.1162/NECO_a_00013.
5. An integral upper bound for neural network approximation.
   Neural Comput. 2009 Oct;21(10):2970-89. doi: 10.1162/neco.2009.04-08-745.
6. Semi-supervised learning based on high density region estimation.
   Neural Netw. 2010 Sep;23(7):812-8. doi: 10.1016/j.neunet.2010.06.001. Epub 2010 Jun 11.
7. Sign-representation of Boolean functions using a small number of monomials.
   Neural Netw. 2009 Sep;22(7):938-48. doi: 10.1016/j.neunet.2009.03.016. Epub 2009 Apr 5.
8. Convergence analysis of three classes of split-complex gradient algorithms for complex-valued recurrent neural networks.
   Neural Comput. 2010 Oct;22(10):2655-77. doi: 10.1162/NECO_a_00021.
9. The loading problem for recursive neural networks.
   Neural Netw. 2005 Oct;18(8):1064-79. doi: 10.1016/j.neunet.2005.07.006. Epub 2005 Sep 29.
10. Evolutionary product unit based neural networks for regression.
    Neural Netw. 2006 May;19(4):477-86. doi: 10.1016/j.neunet.2005.11.001. Epub 2006 Feb 14.