
Experimental stability analysis of neural networks in classification problems with confidence sets for persistence diagrams.

Affiliations

Graduate School of Engineering, Nagoya University, Nagoya 464-8603, Japan.

Institute of Innovation for Future Society, Nagoya University, Nagoya 464-8601, Japan.

Publication Information

Neural Netw. 2021 Nov;143:42-51. doi: 10.1016/j.neunet.2021.05.007. Epub 2021 May 12.

DOI: 10.1016/j.neunet.2021.05.007
PMID: 34087528
Abstract

We investigate classification performance of neural networks (NNs) based on topological insight in an attempt to guarantee stability of their inference. NNs which can accurately classify a dataset map it into a hidden space while disentangling intertwined data. NNs sometimes acquire forcible mapping to disentangle the data, and this forcible mapping generates outliers. The mapping around the outliers is unstable because the outputs change drastically. Hence, we define stable NNs to mean that they do not generate outliers. To investigate the possibility of the existence of outliers, we use persistent homology and a method to estimate the confidence set for persistence diagrams. The combined use enables us to test whether the focused geometry is topologically simple, that is, no outliers. In this work, we use the MNIST and CIFAR-10 datasets and investigate the relationship between the classification performance and the topological characteristics with several NNs. Investigation results with the MNIST dataset show that the test accuracy of all the networks is superior, exceeding 98%, even though the transformed dataset is not topologically simple. Results with the CIFAR-10 dataset also show that the possibility of the existence of outliers is shown in the mapping by the accurate convolutional NNs. Therefore, we conclude that the presented investigation is necessary to guarantee that the NNs, in particular deep NNs, do not acquire unstable mapping for forcible classification.
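The abstract's stability test rests on two ingredients: a persistence diagram of the hidden-layer representation, and a confidence set that separates genuine topological features from sampling noise. As a rough illustration only (not the paper's implementation, which uses full persistent homology with confidence sets in the sense of Fasy et al.), the 0-dimensional part of a Vietoris-Rips persistence diagram can be computed by running union-find over pairwise distances sorted in increasing order; a point that merges with the rest of the cloud only at a large scale appears as a long-lived feature, which is the signature of the outliers the authors look for. All names and data here are illustrative.

```python
import itertools
import math

def h0_persistence(points):
    """Finite death times of the 0-dimensional Vietoris-Rips persistence.

    Every point is born at scale 0; a connected component dies at the
    distance at which it merges into another component (single-linkage,
    i.e. Kruskal's algorithm). One component persists forever and is
    not reported.
    """
    n = len(points)
    parent = list(range(n))

    def find(i):
        # Union-find lookup with path halving.
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    edges = sorted(
        (math.dist(points[i], points[j]), i, j)
        for i, j in itertools.combinations(range(n), 2)
    )
    deaths = []
    for d, i, j in edges:
        ri, rj = find(i), find(j)
        if ri != rj:
            parent[ri] = rj
            deaths.append(d)  # a component dies when two components merge
    return deaths

# A tight cluster plus one far-away point: the lone point merges only at
# a large scale, i.e. it shows up as a long-lived H0 feature -- the kind
# of outlier whose presence makes the learned mapping unstable.
cloud = [(0.0, 0.0), (0.1, 0.0), (0.0, 0.1), (5.0, 5.0)]
deaths = h0_persistence(cloud)
```

In the paper's setting the point cloud is the hidden-layer image of the dataset, and the cutoff between "noise" and "real feature" comes from an estimated confidence set for the persistence diagram rather than from inspecting the death times directly.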


Similar Articles

1. Experimental stability analysis of neural networks in classification problems with confidence sets for persistence diagrams.
   Neural Netw. 2021 Nov;143:42-51. doi: 10.1016/j.neunet.2021.05.007. Epub 2021 May 12.
2. Stable tensor neural networks for efficient deep learning.
   Front Big Data. 2024 May 30;7:1363978. doi: 10.3389/fdata.2024.1363978. eCollection 2024.
3. A Survey of Stochastic Computing Neural Networks for Machine Learning Applications.
   IEEE Trans Neural Netw Learn Syst. 2021 Jul;32(7):2809-2824. doi: 10.1109/TNNLS.2020.3009047. Epub 2021 Jul 6.
4. A Gradient-Guided Evolutionary Approach to Training Deep Neural Networks.
   IEEE Trans Neural Netw Learn Syst. 2022 Sep;33(9):4861-4875. doi: 10.1109/TNNLS.2021.3061630. Epub 2022 Aug 31.
5. High frequency accuracy and loss data of random neural networks trained on image datasets.
   Data Brief. 2022 Jan 5;40:107780. doi: 10.1016/j.dib.2021.107780. eCollection 2022 Feb.
6. Ensemble learning of diffractive optical networks.
   Light Sci Appl. 2021 Jan 11;10(1):14. doi: 10.1038/s41377-020-00446-w.
7. The difficulty of computing stable and accurate neural networks: On the barriers of deep learning and Smale's 18th problem.
   Proc Natl Acad Sci U S A. 2022 Mar 22;119(12):e2107151119. doi: 10.1073/pnas.2107151119. Epub 2022 Mar 16.
8. Understanding Neural Networks and Individual Neuron Importance via Information-Ordered Cumulative Ablation.
   IEEE Trans Neural Netw Learn Syst. 2022 Dec;33(12):7842-7852. doi: 10.1109/TNNLS.2021.3088685. Epub 2022 Nov 30.
9. Image Classification Using Multiple Convolutional Neural Networks on the Fashion-MNIST Dataset.
   Sensors (Basel). 2022 Dec 6;22(23):9544. doi: 10.3390/s22239544.
10. Redundant feature pruning for accelerated inference in deep neural networks.
    Neural Netw. 2019 Oct;118:148-158. doi: 10.1016/j.neunet.2019.04.021. Epub 2019 May 9.