

The Intrinsic Dimension of Neural Network Ensembles.

Authors

Tosti Guerra Francesco, Napoletano Andrea, Zaccaria Andrea

Affiliations

Department of Physics, Sapienza University of Rome, 00185 Rome, Italy.

Istituto dei Sistemi Complessi (ISC-CNR), UOS Sapienza, 00185 Rome, Italy.

Publication

Entropy (Basel). 2025 Apr 18;27(4):440. doi: 10.3390/e27040440.

DOI: 10.3390/e27040440
PMID: 40282675
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12025527/
Abstract

In this work, we propose to study the collective behavior of different ensembles of neural networks. These sets define and live on complex manifolds that evolve through training. Each manifold is characterized by its intrinsic dimension, a measure of the variability of the ensemble and, as such, a measure of the impact of the different training strategies. Indeed, higher intrinsic dimension values imply higher variability among the networks and a larger parameter space coverage. Here, we quantify how much the training choices allow the exploration of the parameter space, finding that a random initialization of the parameters is a stronger source of variability than, progressively, data distortion, dropout, and batch shuffle. We then investigate the combinations of these strategies, the parameters involved, and the impact on the accuracy of the predictions, shedding light on the often-underestimated consequences of these training choices.
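The abstract quantifies ensemble variability through the intrinsic dimension of the manifold spanned by the trained networks. A standard estimator for this quantity is TwoNN (Facco et al., 2017, listed among the cited references below), which uses only the ratio of each point's two nearest-neighbor distances. The sketch below is illustrative, not the authors' implementation; the function name `twonn_id`, the brute-force distance computation, and the maximum-likelihood variant of the fit are assumptions.

```python
import numpy as np

def twonn_id(X):
    """TwoNN intrinsic-dimension estimate (Facco et al., 2017) -- illustrative sketch.

    For each point take the ratio mu = r2 / r1 of its second- and
    first-nearest-neighbor distances. Under the method's local-density
    assumptions, log(mu) is exponentially distributed with rate d, so the
    maximum-likelihood estimate of the intrinsic dimension is
    d = N / sum(log mu).
    """
    X = np.asarray(X, dtype=float)
    n = X.shape[0]
    # Brute-force pairwise squared distances; for large n use a KD-tree instead.
    d2 = np.sum((X[:, None, :] - X[None, :, :]) ** 2, axis=-1)
    np.fill_diagonal(d2, np.inf)          # exclude each point's self-distance
    two_smallest = np.partition(d2, 1, axis=1)[:, :2]
    r1 = np.sqrt(two_smallest[:, 0])      # nearest-neighbor distance
    r2 = np.sqrt(two_smallest[:, 1])      # second-nearest-neighbor distance
    mu = r2 / r1
    return n / np.sum(np.log(mu))

# A 2-dimensional Gaussian cloud linearly embedded in 10 dimensions should
# yield an estimate close to 2, independent of the ambient dimension.
rng = np.random.default_rng(0)
points = rng.normal(size=(1000, 2)) @ rng.normal(size=(2, 10))
print(twonn_id(points))
```

Applied to an ensemble of networks, each "point" would be a flattened parameter vector of one trained network, so the estimate measures how many directions of parameter space the training strategy actually explores.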


Similar Articles

1
The Intrinsic Dimension of Neural Network Ensembles.
Entropy (Basel). 2025 Apr 18;27(4):440. doi: 10.3390/e27040440.
2
Shapley Homology: Topological Analysis of Sample Influence for Neural Networks.
Neural Comput. 2020 Jul;32(7):1355-1378. doi: 10.1162/neco_a_01289. Epub 2020 May 20.
3
Dimension independent bounds for general shallow networks.
Neural Netw. 2020 Mar;123:142-152. doi: 10.1016/j.neunet.2019.11.006. Epub 2019 Nov 22.
4
Deep convolutional neural network and IoT technology for healthcare.
Digit Health. 2024 Jan 17;10:20552076231220123. doi: 10.1177/20552076231220123. eCollection 2024 Jan-Dec.
5
Perturbing low dimensional activity manifolds in spiking neuronal networks.
PLoS Comput Biol. 2019 May 31;15(5):e1007074. doi: 10.1371/journal.pcbi.1007074. eCollection 2019 May.
6
Ensemble competitive learning neural networks with reduced input dimension.
Int J Neural Syst. 1995 Jun;6(2):133-42. doi: 10.1142/s0129065795000111.
7
Soft-margin classification of object manifolds.
Phys Rev E. 2022 Aug;106(2-1):024126. doi: 10.1103/PhysRevE.106.024126.
8
A constructive algorithm for training cooperative neural network ensembles.
IEEE Trans Neural Netw. 2003;14(4):820-34. doi: 10.1109/TNN.2003.813832.
9
Network inference with ensembles of bi-clustering trees.
BMC Bioinformatics. 2019 Oct 28;20(1):525. doi: 10.1186/s12859-019-3104-y.
10
Ensemble machine learning model trained on a new synthesized dataset generalizes well for stress prediction using wearable devices.
J Biomed Inform. 2023 Dec;148:104556. doi: 10.1016/j.jbi.2023.104556. Epub 2023 Dec 2.

References Cited in This Article

1
A neural network potential with self-trained atomic fingerprints: A test with the mW water potential.
J Chem Phys. 2023 Mar 14;158(10):104501. doi: 10.1063/5.0139245.
2
Unveiling the Structure of Wide Flat Minima in Neural Networks.
Phys Rev Lett. 2021 Dec 31;127(27):278301. doi: 10.1103/PhysRevLett.127.278301.
3
High-dimensional dynamics of generalization error in neural networks.
Neural Netw. 2020 Dec;132:428-446. doi: 10.1016/j.neunet.2020.08.022. Epub 2020 Sep 5.
4
Jamming transition as a paradigm to understand the loss landscape of deep neural networks.
Phys Rev E. 2019 Jul;100(1-1):012115. doi: 10.1103/PhysRevE.100.012115.
5
A mean field view of the landscape of two-layer neural networks.
Proc Natl Acad Sci U S A. 2018 Aug 14;115(33):E7665-E7671. doi: 10.1073/pnas.1806579115. Epub 2018 Jul 27.
6
Deep Learning for Computer Vision: A Brief Review.
Comput Intell Neurosci. 2018 Feb 1;2018:7068349. doi: 10.1155/2018/7068349. eCollection 2018.
7
Estimating the intrinsic dimension of datasets by a minimal neighborhood information.
Sci Rep. 2017 Sep 22;7(1):12140. doi: 10.1038/s41598-017-11873-y.
8
Using sketch-map coordinates to analyze and bias molecular dynamics simulations.
Proc Natl Acad Sci U S A. 2012 Apr 3;109(14):5196-201. doi: 10.1073/pnas.1201152109. Epub 2012 Mar 16.