Can dictionary-based computational models outperform the best linear ones?

Affiliations

Department of Communications, Computer, and System Sciences (DIST), University of Genoa, Via Opera Pia 13, 16145 Genova, Italy.

Publication Information

Neural Netw. 2011 Oct;24(8):881-7. doi: 10.1016/j.neunet.2011.05.014. Epub 2011 Jun 12.

DOI: 10.1016/j.neunet.2011.05.014
PMID: 21704495
Abstract

Approximation capabilities of two types of computational models are explored: dictionary-based models (i.e., linear combinations of n-tuples of basis functions computable by units belonging to a set called "dictionary") and linear ones (i.e., linear combinations of n fixed basis functions). The two models are compared in terms of approximation rates, i.e., speeds of decrease of approximation errors for a growing number n of basis functions. Proofs of upper bounds on approximation rates by dictionary-based models are inspected, to show that for individual functions they do not imply estimates for dictionary-based models that do not hold also for some linear models. Instead, the possibility of getting faster approximation rates by dictionary-based models is demonstrated for worst-case errors in approximation of suitable sets of functions. For such sets, even geometric upper bounds hold.
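The contrast between the two model classes can be illustrated numerically. The following is a minimal sketch (not taken from the paper): the "linear model" uses the first n basis functions of a cosine dictionary, fixed in advance, while the "dictionary-based model" selects its n atoms greedily via orthogonal matching pursuit. The target function, the cosine dictionary, and the choice n = 4 are all illustrative assumptions.

```python
import numpy as np

# Grid and target: the target is sparse in high-frequency cosines, so the
# first n fixed basis functions miss it, while greedy selection from the
# full dictionary can represent it with very few atoms.
x = np.linspace(0, np.pi, 400)
f = np.cos(17 * x) + 0.5 * np.cos(41 * x)

# Dictionary of cosine atoms cos(kx), k = 0..63, as columns.
D = np.stack([np.cos(k * x) for k in range(64)], axis=1)

def lsq_error(A, f):
    """L2 error of the best linear combination of the columns of A."""
    coef, *_ = np.linalg.lstsq(A, f, rcond=None)
    return np.linalg.norm(A @ coef - f)

n = 4

# Linear model: the n basis functions are fixed in advance (k = 0..n-1).
err_linear = lsq_error(D[:, :n], f)

# Dictionary-based model: pick n atoms greedily (orthogonal matching
# pursuit), refitting all chosen atoms by least squares at each step.
chosen, residual = [], f.copy()
for _ in range(n):
    k = int(np.argmax(np.abs(D.T @ residual)))
    chosen.append(k)
    coef, *_ = np.linalg.lstsq(D[:, chosen], f, rcond=None)
    residual = f - D[:, chosen] @ coef
err_dict = np.linalg.norm(residual)

print(f"fixed first-{n} basis error: {err_linear:.3f}")
print(f"greedy {n}-atom error:       {err_dict:.3e}")
```

For this target the greedy procedure locates the two active atoms (k = 17 and k = 41) and drives the error to machine precision, while the fixed low-frequency basis captures essentially nothing. This matches the abstract's point in spirit: for individual functions a suitable linear model can always do as well, but over a set of functions unknown in advance, no single fixed n-dimensional basis works for all of them.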


Similar Articles

1. Can dictionary-based computational models outperform the best linear ones?
   Neural Netw. 2011 Oct;24(8):881-7. doi: 10.1016/j.neunet.2011.05.014. Epub 2011 Jun 12.
2. Some comparisons of complexity in dictionary-based and linear computational models.
   Neural Netw. 2011 Mar;24(2):171-82. doi: 10.1016/j.neunet.2010.10.002. Epub 2010 Nov 19.
3. An integral upper bound for neural network approximation.
   Neural Comput. 2009 Oct;21(10):2970-89. doi: 10.1162/neco.2009.04-08-745.
4. Minimization of error functionals over perceptron networks.
   Neural Comput. 2008 Jan;20(1):252-70. doi: 10.1162/neco.2008.20.1.252.
5. Neural networks with local receptive fields and superlinear VC dimension.
   Neural Comput. 2002 Apr;14(4):919-56. doi: 10.1162/089976602317319018.
6. On the complexity of computing and learning with multiplicative neural networks.
   Neural Comput. 2002 Feb;14(2):241-301. doi: 10.1162/08997660252741121.
7. A comparative study of autoregressive neural network hybrids.
   Neural Netw. 2005 Jun-Jul;18(5-6):781-9. doi: 10.1016/j.neunet.2005.06.003.
8. Fading memory and kernel properties of generic cortical microcircuit models.
   J Physiol Paris. 2004 Jul-Nov;98(4-6):315-30. doi: 10.1016/j.jphysparis.2005.09.020. Epub 2005 Nov 28.
9. Comparison of universal approximators incorporating partial monotonicity by structure.
   Neural Netw. 2010 May;23(4):471-5. doi: 10.1016/j.neunet.2009.09.002. Epub 2009 Sep 17.
10. Probabilistic lower bounds for approximation by shallow perceptron networks.
    Neural Netw. 2017 Jul;91:34-41. doi: 10.1016/j.neunet.2017.04.003. Epub 2017 Apr 19.