

Structural network measures reveal the emergence of heavy-tailed degree distributions in lottery ticket multilayer perceptrons.

Authors

Kang Chris, Moore Jasmine A, Robertson Samuel, Wilms Matthias, Towlson Emma K, Forkert Nils D

Affiliations

Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada; Department of Radiology, University of Calgary, Calgary, AB, Canada.

Hotchkiss Brain Institute, University of Calgary, Calgary, AB, Canada; Department of Radiology, University of Calgary, Calgary, AB, Canada; Department of Biomedical Engineering, University of Calgary, Calgary, AB, Canada; Alberta Children's Hospital Research Institute, University of Calgary, Calgary, AB, Canada.

Publication

Neural Netw. 2025 Jul;187:107308. doi: 10.1016/j.neunet.2025.107308. Epub 2025 Mar 12.

DOI: 10.1016/j.neunet.2025.107308
PMID: 40120548
Abstract

Artificial neural networks (ANNs) were originally modeled after their biological counterparts, but have since conceptually diverged in many ways. The resulting network architectures are not well understood, and furthermore, we lack the quantitative tools to characterize their structures. Network science provides an ideal mathematical framework with which to characterize systems of interacting components, and has transformed our understanding across many domains, including the mammalian brain. Yet, little has been done to bring network science to ANNs. In this work, we propose tools that leverage and adapt network science methods to measure both global- and local-level characteristics of ANNs. Specifically, we focus on the structures of efficient multilayer perceptrons as a case study, which are sparse and systematically pruned such that they share many characteristics with real-world networks. We use adapted network science metrics to show that the pruning process leads to the emergence of a spanning subnetwork (lottery ticket multilayer perceptrons) with complex architecture. This complex network exhibits global and local characteristics, including heavy-tailed nodal degree distributions and dominant weighted pathways, that mirror patterns observed in human neuronal connectivity. Furthermore, alterations in network metrics precede catastrophic decay in performance as the network is heavily pruned. This network science-driven approach to the analysis of artificial neural networks serves as a valuable tool to establish and improve biological fidelity, increase the interpretability, and assess the performance of artificial neural networks.

Significance Statement

Artificial neural network architectures have become increasingly complex, often diverging from their biological counterparts in many ways. To design plausible "brain-like" architectures, whether to advance neuroscience research or to improve explainability, it is essential that these networks optimally resemble their biological counterparts. Network science tools offer valuable information about interconnected systems, including the brain, but have not attracted much attention for analyzing artificial neural networks. Here, we present the significance of our work:

• We adapt network science tools to analyze the structural characteristics of artificial neural networks.

• We demonstrate that organizational patterns similar to those observed in the mammalian brain emerge through the pruning process alone. The convergence on these complex network features in both artificial neural networks and biological brain networks is compelling evidence for their optimality in information processing capabilities.

• Our approach is a significant first step towards a network science-based understanding of artificial neural networks, and has the potential to shed light on the biological fidelity of artificial neural networks.
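The core measurement the abstract describes, extracting the surviving sparse subnetwork after pruning and inspecting its nodal degree distribution, can be sketched in a few lines. This is not the authors' code; it is a minimal illustration assuming simple one-shot magnitude pruning of a single dense layer, with the layer treated as a bipartite graph whose edges are the unpruned weights.

```python
import numpy as np

# Minimal sketch (assumption, not the paper's implementation): magnitude-prune
# a dense 256 -> 256 layer and measure the nodal degrees of the surviving
# subnetwork, the quantity whose distribution the paper finds to be heavy-tailed.
rng = np.random.default_rng(0)
W = rng.normal(size=(256, 256))

sparsity = 0.9  # prune the 90% of weights with smallest magnitude
threshold = np.quantile(np.abs(W), sparsity)
mask = np.abs(W) > threshold  # True where a connection survives

# Bipartite view of the layer: each unit's degree is the number of surviving
# edges incident to it (incoming edges for outputs, outgoing for inputs).
in_degree = mask.sum(axis=0)   # surviving edges into each output unit
out_degree = mask.sum(axis=1)  # surviving edges out of each input unit
degrees = np.concatenate([in_degree, out_degree])

print(f"mean degree: {degrees.mean():.1f}, max degree: {degrees.max()}")
```

In a heavy-tailed regime, a histogram of `degrees` across many pruning iterations would show a small number of hub units retaining far more connections than the mean; here the random initialization only demonstrates the measurement itself.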

Similar Articles

1. Structural network measures reveal the emergence of heavy-tailed degree distributions in lottery ticket multilayer perceptrons.
   Neural Netw. 2025 Jul;187:107308. doi: 10.1016/j.neunet.2025.107308. Epub 2025 Mar 12.
2. Neural Classifiers with Limited Connectivity and Recurrent Readouts.
   J Neurosci. 2018 Nov 14;38(46):9900-9924. doi: 10.1523/JNEUROSCI.3506-17.2018. Epub 2018 Sep 24.
3. Sparse connectivity enables efficient information processing in cortex-like artificial neural networks.
   Front Neural Circuits. 2025 Mar 13;19:1528309. doi: 10.3389/fncir.2025.1528309. eCollection 2025.
4. Exploring continual learning strategies in artificial neural networks through graph-based analysis of connectivity: Insights from a brain-inspired perspective.
   Neural Netw. 2025 May;185:107125. doi: 10.1016/j.neunet.2025.107125. Epub 2025 Jan 15.
5. Efficient coding in biophysically realistic excitatory-inhibitory spiking networks.
   Elife. 2025 Mar 7;13:RP99545. doi: 10.7554/eLife.99545.
6. Role of short-term plasticity and slow temporal dynamics in enhancing time series prediction with a brain-inspired recurrent neural network.
   Chaos. 2025 Feb 1;35(2). doi: 10.1063/5.0233158.
7. Beyond multilayer perceptrons: Investigating complex topologies in neural networks.
   Neural Netw. 2024 Mar;171:215-228. doi: 10.1016/j.neunet.2023.12.012. Epub 2023 Dec 9.
8. Synthetic biological neural networks: From current implementations to future perspectives.
   Biosystems. 2024 Mar;237:105164. doi: 10.1016/j.biosystems.2024.105164. Epub 2024 Feb 23.
9. Rethinking the performance comparison between SNNS and ANNS.
   Neural Netw. 2020 Jan;121:294-307. doi: 10.1016/j.neunet.2019.09.005. Epub 2019 Sep 19.
10. Artificial Neural Networks for Neuroscientists: A Primer.
   Neuron. 2020 Sep 23;107(6):1048-1070. doi: 10.1016/j.neuron.2020.09.005.