

New role for circuit expansion for learning in neural networks.

Affiliations

Center for Brain Science, Harvard University, Cambridge, Massachusetts 02138, USA.

Department of Physics, Harvard University, Cambridge, Massachusetts 02138, USA.

Publication information

Phys Rev E. 2021 Feb;103(2-1):022404. doi: 10.1103/PhysRevE.103.022404.

DOI: 10.1103/PhysRevE.103.022404
PMID: 33736047
Abstract

Many sensory pathways in the brain include sparsely active populations of neurons downstream from the input stimuli. The biological purpose of this expanded structure is unclear, but it may be beneficial due to the increased expressive power of the network. In this work, we show that certain ways of expanding a neural network can improve its generalization performance even when the expanded structure is pruned after the learning period. To study this setting, we use a teacher-student framework where a perceptron teacher network generates labels corrupted with small amounts of noise. We then train a student network structurally matched to the teacher. In this scenario, the student can achieve optimal accuracy if given the teacher's synaptic weights. We find that sparse expansion of the input layer of a student perceptron network both increases its capacity and improves the generalization performance of the network when learning a noisy rule from a teacher perceptron when the expansion is pruned after learning. We find similar behavior when the expanded units are stochastic and uncorrelated with the input and analyze this network in the mean-field limit. By solving the mean-field equations, we show that the generalization error of the stochastic expanded student network continues to drop as the size of the network increases. This improvement in generalization performance occurs despite the increased complexity of the student network relative to the teacher it is trying to learn. We show that this effect is closely related to the addition of slack variables in artificial neural networks and suggest possible implications for artificial and biological neural networks.
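The teacher-student protocol described in the abstract can be sketched as follows. This is a minimal illustration only, assuming Gaussian inputs, sign labels with additive pre-sign noise, the classic perceptron update rule, and pruning by simply discarding the expanded weights after training; the paper's exact training procedure, noise model, and mean-field analysis may differ.

```python
# Hedged sketch of the teacher-student setup with a stochastic input
# expansion that is pruned after learning (illustrative, not the
# paper's exact protocol).
import numpy as np

rng = np.random.default_rng(0)

N = 100      # input dimension (teacher size)
M = 200      # stochastic expanded units added to the student
P = 500      # number of training examples
noise = 0.1  # label-noise level

# Teacher: a fixed perceptron over the N "real" inputs.
w_teacher = rng.standard_normal(N)

# Training data: labels are corrupted with a small amount of noise
# before taking the sign, as in the abstract.
X = rng.standard_normal((P, N))
y = np.sign(X @ w_teacher + noise * np.sqrt(N) * rng.standard_normal(P))

# Student input: the real inputs plus M stochastic units that are
# uncorrelated with the input (one of the cases analyzed in the paper).
Z = rng.standard_normal((P, M))
X_exp = np.hstack([X, Z])

# Train the expanded student with the classic perceptron rule.
w_student = np.zeros(N + M)
for _ in range(200):  # epochs
    mistakes = 0
    for x, t in zip(X_exp, y):
        if np.sign(x @ w_student) != t:
            w_student += t * x  # perceptron update on a mistake
            mistakes += 1
    if mistakes == 0:
        break

# Prune the expansion after learning: keep only the N real-input weights.
w_pruned = w_student[:N]

# Generalization error of the pruned student on fresh noiseless data.
X_test = rng.standard_normal((5000, N))
y_test = np.sign(X_test @ w_teacher)
gen_err = float(np.mean(np.sign(X_test @ w_pruned) != y_test))
print(f"pruned-student generalization error: {gen_err:.3f}")
```

Varying `M` in this sketch gives a crude handle on the paper's central claim: the expanded-then-pruned student can generalize better than a student trained without the expansion, even though the pruned network is structurally matched to the teacher.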


Similar articles

1
New role for circuit expansion for learning in neural networks.
Phys Rev E. 2021 Feb;103(2-1):022404. doi: 10.1103/PhysRevE.103.022404.
2
Dynamics of Supervised and Reinforcement Learning in the Non-Linear Perceptron.
ArXiv. 2025 Feb 24:arXiv:2409.03749v3.
3
A learning rule for very simple universal approximators consisting of a single layer of perceptrons.
Neural Netw. 2008 Jun;21(5):786-95. doi: 10.1016/j.neunet.2007.12.036. Epub 2007 Dec 31.
4
Dynamics of stochastic gradient descent for two-layer neural networks in the teacher-student setup.
J Stat Mech. 2020 Dec;2020(12):124010. doi: 10.1088/1742-5468/abc61e. Epub 2020 Dec 21.
5
Dendritic normalisation improves learning in sparsely connected artificial neural networks.
PLoS Comput Biol. 2021 Aug 9;17(8):e1009202. doi: 10.1371/journal.pcbi.1009202. eCollection 2021 Aug.
6
Learning with incomplete information in the committee machine.
Biol Cybern. 2009 Dec;101(5-6):401-10. doi: 10.1007/s00422-009-0345-2. Epub 2009 Nov 4.
7
Robust Student Network Learning.
IEEE Trans Neural Netw Learn Syst. 2020 Jul;31(7):2455-2468. doi: 10.1109/TNNLS.2019.2929114. Epub 2019 Aug 16.
8
Efficient Combination of CNN and Transformer for Dual-Teacher Uncertainty-guided Semi-supervised Medical Image Segmentation.
Comput Methods Programs Biomed. 2022 Nov;226:107099. doi: 10.1016/j.cmpb.2022.107099. Epub 2022 Sep 2.
9
Learning Student Networks via Feature Embedding.
IEEE Trans Neural Netw Learn Syst. 2021 Jan;32(1):25-35. doi: 10.1109/TNNLS.2020.2970494. Epub 2021 Jan 4.
10
Deep convolutional neural network and IoT technology for healthcare.
Digit Health. 2024 Jan 17;10:20552076231220123. doi: 10.1177/20552076231220123. eCollection 2024 Jan-Dec.

Cited by

1
The information theory of developmental pruning: Optimizing global network architectures using local synaptic rules.
PLoS Comput Biol. 2021 Oct 11;17(10):e1009458. doi: 10.1371/journal.pcbi.1009458. eCollection 2021 Oct.