


Rosenblatt's First Theorem and Frugality of Deep Learning.

Author Information

Kirdin Alexander, Sidorov Sergey, Zolotykh Nikolai

Affiliations

Institute of Information Technologies, Mathematics and Mechanics, Lobachevsky State University, 603022 Nizhni Novgorod, Russia.

Institute for Computational Modelling, Russian Academy of Sciences, Siberian Branch, 660036 Krasnoyarsk, Russia.

Publication Information

Entropy (Basel). 2022 Nov 10;24(11):1635. doi: 10.3390/e24111635.

DOI: 10.3390/e24111635
PMID: 36359726
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9689667/
Abstract

Rosenblatt's first theorem about the omnipotence of shallow networks states that elementary perceptrons can solve any classification problem if there are no discrepancies in the training set. Minsky and Papert considered elementary perceptrons with restrictions on the neural inputs: a bounded number of connections or a relatively small diameter of the receptive field for each neuron at the hidden layer. They proved that under these constraints, an elementary perceptron cannot solve some problems, such as the connectivity of input images or the parity of pixels in them. In this note, we demonstrated Rosenblatt's first theorem at work, showed how an elementary perceptron can solve a version of the travel maze problem, and analysed the complexity of that solution. We also constructed a deep network algorithm for the same problem; it is much more efficient. The shallow network uses an exponentially large number of neurons on the hidden layer (Rosenblatt's A-elements), whereas for the deep network, second-order polynomial complexity is sufficient. We demonstrated that for the same complex problem, the deep network can be much smaller, and revealed a heuristic behind this effect.
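The shallow-versus-deep gap described in the abstract can be illustrated on the parity problem it mentions. The sketch below is not the paper's maze construction; it is a classical illustrative comparison under stated assumptions: a one-hidden-layer threshold network that enumerates one "detector" neuron per odd-parity input pattern (2^(n-1) hidden units, in the spirit of Rosenblatt's A-elements with unrestricted receptive fields), against a deep chain of two-input XOR gates that computes the same function with only n-1 gates. The function names (`shallow_parity`, `deep_parity`) are hypothetical.

```python
from itertools import product

def shallow_parity(x):
    """One-hidden-layer threshold network for n-bit parity.

    Illustrative construction: one detector neuron per odd-parity
    input pattern (2^(n-1) hidden units), then an OR output neuron.
    Returns (output bit, number of hidden units).
    """
    n = len(x)
    hidden = []
    for p in product([0, 1], repeat=n):
        if sum(p) % 2 == 1:                      # enumerate odd-parity patterns
            w = [1 if b else -1 for b in p]      # weights: +1 on ones, -1 on zeros
            theta = sum(p)                       # threshold reached only when x == p
            hidden.append(1 if sum(wi * xi for wi, xi in zip(w, x)) >= theta else 0)
    return (1 if sum(hidden) >= 1 else 0), len(hidden)

def deep_parity(x):
    """Deep alternative: chain of two-input XOR gates, n-1 gates in total.

    Returns (output bit, number of gates).
    """
    acc, gates = x[0], 0
    for xi in x[1:]:
        acc ^= xi
        gates += 1
    return acc, gates

# Sanity check: both networks agree with true parity on all 4-bit inputs.
n = 4
for x in product([0, 1], repeat=n):
    s, _ = shallow_parity(list(x))
    d, _ = deep_parity(list(x))
    assert s == d == sum(x) % 2

print(shallow_parity([1, 0, 1, 1])[1], deep_parity([1, 0, 1, 1])[1])  # → 8 3
```

The counts make the exponential-versus-linear contrast concrete for n = 4: 2^(n-1) = 8 hidden detectors versus n-1 = 3 gates; the paper's maze problem exhibits the analogous exponential-versus-polynomial gap.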


Figures (PMC):
Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ea74/9689667/18cfff0464a2/entropy-24-01635-g001.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ea74/9689667/9e6fbe412b84/entropy-24-01635-g002.jpg
Figure 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ea74/9689667/8912b1d9fae9/entropy-24-01635-g003.jpg
Figure 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ea74/9689667/bf3804a2fe2d/entropy-24-01635-g004.jpg
Figure 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ea74/9689667/651a5c5833f6/entropy-24-01635-g005.jpg
Figure 6: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ea74/9689667/c622ed35ddaf/entropy-24-01635-g006.jpg
Figure 7: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ea74/9689667/9431f63a7378/entropy-24-01635-g007.jpg

Similar Articles

1. Rosenblatt's First Theorem and Frugality of Deep Learning. Entropy (Basel). 2022 Nov 10;24(11):1635. doi: 10.3390/e24111635.
2. Universal perceptron and DNA-like learning algorithm for binary neural networks: LSBF and PBF implementations. IEEE Trans Neural Netw. 2009 Oct;20(10):1645-58. doi: 10.1109/tnn.2009.2028886.
3. Optimal convergence of on-line backpropagation. IEEE Trans Neural Netw. 1996;7(1):251-4. doi: 10.1109/72.478415.
4. Neural Classifiers with Limited Connectivity and Recurrent Readouts. J Neurosci. 2018 Nov 14;38(46):9900-9924. doi: 10.1523/JNEUROSCI.3506-17.2018. Epub 2018 Sep 24.
5. A new approach to perceptron training. IEEE Trans Neural Netw. 2003;14(1):216-21. doi: 10.1109/TNN.2002.806631.
6. Perceptrons from memristors. Neural Netw. 2020 Feb;122:273-278. doi: 10.1016/j.neunet.2019.10.013. Epub 2019 Nov 2.
7. Biologically plausible deep learning - But how far can we go with shallow networks? Neural Netw. 2019 Oct;118:90-101. doi: 10.1016/j.neunet.2019.06.001. Epub 2019 Jun 20.
8. A learning rule for very simple universal approximators consisting of a single layer of perceptrons. Neural Netw. 2008 Jun;21(5):786-95. doi: 10.1016/j.neunet.2007.12.036. Epub 2007 Dec 31.
9. Novel deep neural network based pattern field classification architectures. Neural Netw. 2020 Jul;127:82-95. doi: 10.1016/j.neunet.2020.03.011. Epub 2020 Mar 14.
10. Specification of training sets and the number of hidden neurons for multilayer perceptrons. Neural Comput. 2001 Dec;13(12):2673-80. doi: 10.1162/089976601317098484.

References Cited in This Article

1. Robust and Scalable Learning of Complex Intrinsic Dataset Geometry via ElPiGraph. Entropy (Basel). 2020 Mar 4;22(3):296. doi: 10.3390/e22030296.
2. Trajectories, bifurcations, and pseudo-time in large clinical datasets: applications to myocardial infarction and diabetes data. Gigascience. 2020 Nov 25;9(11). doi: 10.1093/gigascience/giaa128.
3. Single-cell trajectories reconstruction, exploration and mapping of omics data with STREAM. Nat Commun. 2019 Apr 23;10(1):1903. doi: 10.1038/s41467-019-09670-4.
4. On the complexity of neural network classifiers: a comparison between shallow and deep architectures. IEEE Trans Neural Netw Learn Syst. 2014 Aug;25(8):1553-65. doi: 10.1109/TNNLS.2013.2293637.
5. An integral upper bound for neural network approximation. Neural Comput. 2009 Oct;21(10):2970-89. doi: 10.1162/neco.2009.04-08-745.
6. Representations and rates of approximation of real-valued Boolean functions by neural networks. Neural Netw. 1998 Jun;11(4):651-659. doi: 10.1016/s0893-6080(98)00039-2.