
Coding schemes in neural networks learning classification tasks

Authors

van Meegen Alexander, Sompolinsky Haim

Affiliations

Center for Brain Science, Harvard University, Cambridge, MA, 02138, USA.

Edmond and Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem, 9190401, Israel.

Publication

Nat Commun. 2025 Apr 9;16(1):3354. doi: 10.1038/s41467-025-58276-6.

DOI: 10.1038/s41467-025-58276-6
PMID: 40204730
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11982327/
Abstract

Neural networks possess the crucial ability to generate meaningful representations of task-dependent features. Indeed, with appropriate scaling, supervised learning in neural networks can result in strong, task-dependent feature learning. However, the nature of the emergent representations is still unclear. To understand the effect of learning on representations, we investigate fully-connected, wide neural networks learning classification tasks using the Bayesian framework where learning shapes the posterior distribution of the network weights. Consistent with previous findings, our analysis of the feature learning regime (also known as 'non-lazy' regime) shows that the networks acquire strong, data-dependent features, denoted as coding schemes, where neuronal responses to each input are dominated by its class membership. Surprisingly, the nature of the coding schemes depends crucially on the neuronal nonlinearity. In linear networks, an analog coding scheme of the task emerges; in nonlinear networks, strong spontaneous symmetry breaking leads to either redundant or sparse coding schemes. Our findings highlight how network properties such as scaling of weights and neuronal nonlinearity can profoundly influence the emergent representations.
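The abstract treats learning as shaping a posterior distribution over network weights rather than producing a single trained weight vector. As a rough numerical illustration only (not the paper's analytical derivation), the sketch below samples such a posterior for a one-hidden-layer tanh network with Langevin dynamics on a toy two-class task, using the 1/N readout scaling associated with the feature-learning ('non-lazy') regime; the toy labels, unit Gaussian priors, and all hyperparameters are assumptions made for illustration. The final diagnostic asks whether each hidden unit's response is dominated by class membership, the signature of the coding schemes described in the abstract.

    # Numerical sketch only (illustrative, not the paper's analytical theory):
    # sample a Bayesian posterior over the weights of a wide one-hidden-layer
    # tanh network via Langevin dynamics on a toy two-class task, then ask
    # whether hidden responses are dominated by class membership.
    import numpy as np

    rng = np.random.default_rng(0)
    P, D, N = 40, 10, 200                  # examples, input dim, hidden width
    X = rng.standard_normal((P, D))
    y = np.where(X[:, 0] > 0, 1.0, -1.0)   # assumed toy binary labels

    W = rng.standard_normal((D, N)) / np.sqrt(D)   # hidden-layer weights
    a = rng.standard_normal(N)                     # readout weights

    T, eta, steps = 1e-2, 1e-4, 20000      # temperature, step size (assumed)

    def grads_and_h(W, a):
        # Energy U = (1/2T) * sum((f - y)^2) + (|W|^2 + |a|^2) / 2
        h = np.tanh(X @ W)                 # hidden responses, shape (P, N)
        f = h @ a / N                      # 1/N readout: feature-learning scaling
        err = (f - y) / T
        gW = X.T @ ((err[:, None] * a / N) * (1 - h ** 2)) + W
        ga = h.T @ err / N + a
        return gW, ga, h

    for _ in range(steps):                 # Langevin sampling of exp(-U)
        gW, ga, _ = grads_and_h(W, a)
        W += -eta * gW + np.sqrt(2 * eta) * rng.standard_normal(W.shape)
        a += -eta * ga + np.sqrt(2 * eta) * rng.standard_normal(N)

    # Coding-scheme diagnostic: class-conditional mean response of each unit.
    _, _, h = grads_and_h(W, a)
    gap = h[y > 0].mean(0) - h[y < 0].mean(0)
    print("class-response gap, first 5 units:", np.round(gap[:5], 3))

In this toy setup, the coding scheme would be read off from the pattern of per-unit gaps: per the abstract, an analog spread of responses is expected in the linear case, versus redundant or sparse class-selective units arising from symmetry breaking in the nonlinear case.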

Figures (PMC11982327):
Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ff4a/11982327/0c53fb313861/41467_2025_58276_Fig1_HTML.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ff4a/11982327/86f777076e24/41467_2025_58276_Fig2_HTML.jpg
Figure 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ff4a/11982327/31d1523324be/41467_2025_58276_Fig3_HTML.jpg
Figure 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ff4a/11982327/e7a0e62cb08c/41467_2025_58276_Fig4_HTML.jpg
Figure 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ff4a/11982327/3591d4396600/41467_2025_58276_Fig5_HTML.jpg
Figure 6: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ff4a/11982327/cbd12e71c3c8/41467_2025_58276_Fig6_HTML.jpg
Figure 7: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/ff4a/11982327/a373634329fa/41467_2025_58276_Fig7_HTML.jpg

Similar Articles

1. Coding schemes in neural networks learning classification tasks. Nat Commun. 2025 Apr 9;16(1):3354. doi: 10.1038/s41467-025-58276-6.
2. Orthogonal representations for robust context-dependent task performance in brains and neural networks. Neuron. 2022 Apr 6;110(7):1258-1270.e11. doi: 10.1016/j.neuron.2022.01.005. Epub 2022 Jan 31.
3. From lazy to rich to exclusive task representations in neural networks and neural codes. Curr Opin Neurobiol. 2023 Dec;83:102780. doi: 10.1016/j.conb.2023.102780. Epub 2023 Sep 25.
4. Dynamics of Supervised and Reinforcement Learning in the Non-Linear Perceptron. ArXiv. 2025 Feb 24:arXiv:2409.03749v3.
5. Semi Supervised Learning with Deep Embedded Clustering for Image Classification and Segmentation. IEEE Access. 2019;7:11093-11104. doi: 10.1109/ACCESS.2019.2891970. Epub 2019 Jan 9.
6. fMRI volume classification using a 3D convolutional neural network robust to shifted and scaled neuronal activations. Neuroimage. 2020 Dec;223:117328. doi: 10.1016/j.neuroimage.2020.117328. Epub 2020 Sep 5.
7. Provable Multi-Task Representation Learning by Two-Layer ReLU Neural Networks. Proc Mach Learn Res. 2024 Jul;235:9292-9345.
8. Temporal Coding in Spiking Neural Networks With Alpha Synaptic Function: Learning With Backpropagation. IEEE Trans Neural Netw Learn Syst. 2022 Oct;33(10):5939-5952. doi: 10.1109/TNNLS.2021.3071976. Epub 2022 Oct 5.
9. Accelerating the training of feedforward neural networks using generalized Hebbian rules for initializing the internal representations. IEEE Trans Neural Netw. 1996;7(2):419-26. doi: 10.1109/72.485677.
10. Learning-induced reorganization of number neurons and emergence of numerical representations in a biologically inspired neural network. Nat Commun. 2023 Jun 29;14(1):3843. doi: 10.1038/s41467-023-39548-5.

Cited By

1. Summary statistics of learning link changing neural representations to behavior. ArXiv. 2025 Jul 14:arXiv:2504.16920v2.

References

1. Bayesian interpolation with deep linear networks. Proc Natl Acad Sci U S A. 2023 Jun 6;120(23):e2301345120. doi: 10.1073/pnas.2301345120. Epub 2023 May 30.
2. Separation of scales and a thermodynamic description of feature learning in some CNNs. Nat Commun. 2023 Feb 17;14(1):908. doi: 10.1038/s41467-023-36361-y.
3. Neural representational geometry underlies few-shot concept learning. Proc Natl Acad Sci U S A. 2022 Oct 25;119(43):e2200800119. doi: 10.1073/pnas.2200800119. Epub 2022 Oct 17.
4. Contrasting random and learned features in deep Bayesian linear regression. Phys Rev E. 2022 Jun;105(6-1):064118. doi: 10.1103/PhysRevE.105.064118.
5. Predicting the outputs of finite deep neural networks trained with noisy gradients. Phys Rev E. 2021 Dec;104(6-1):064301. doi: 10.1103/PhysRevE.104.064301.
6. Prevalence of neural collapse during the terminal phase of deep learning training. Proc Natl Acad Sci U S A. 2020 Oct 6;117(40):24652-24663. doi: 10.1073/pnas.2015509117. Epub 2020 Sep 21.
7. Reconciling modern machine-learning practice and the classical bias-variance trade-off. Proc Natl Acad Sci U S A. 2019 Aug 6;116(32):15849-15854. doi: 10.1073/pnas.1903070116. Epub 2019 Jul 24.
8. A mean field view of the landscape of two-layer neural networks. Proc Natl Acad Sci U S A. 2018 Aug 14;115(33):E7665-E7671. doi: 10.1073/pnas.1806579115. Epub 2018 Jul 27.
9. Deep learning. Nature. 2015 May 28;521(7553):436-44. doi: 10.1038/nature14539.
10. Representation learning: a review and new perspectives. IEEE Trans Pattern Anal Mach Intell. 2013 Aug;35(8):1798-828. doi: 10.1109/TPAMI.2013.50.