Suppr 超能文献



Using phidelta diagrams to discover relevant patterns in multilayer perceptrons.

Affiliation

Department of Mathematics and Computer Science, University of Cagliari, Cagliari, Italy.

Publication

Sci Rep. 2020 Dec 7;10(1):21334. doi: 10.1038/s41598-020-76517-0.

DOI: 10.1038/s41598-020-76517-0
PMID: 33288773
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7721750/
Abstract

Understanding the inner behaviour of multilayer perceptrons during and after training is a goal of paramount importance for many researchers worldwide. This article experimentally shows that relevant patterns emerge upon training, typically related to the difficulty of the underlying problem. The occurrence of these patterns is highlighted by means of φδ (phi-delta) diagrams, a 2D graphical tool originally devised to support research on classifier performance evaluation and feature assessment. On the assumption that multilayer perceptrons are powerful engines for feature encoding, hidden layers have been inspected as if they were hosting new input features. Interestingly, some problems that appear difficult when dealt with using a single hidden layer turn out to be easier upon the addition of further layers. The experimental findings reported in this article lend further support to the view that implementing neural architectures with multiple layers may help boost their generalisation ability. A generic training strategy inspired by relevant recommendations from deep learning has also been devised. A basic implementation of this strategy was used throughout the experiments aimed at identifying relevant patterns inside multilayer perceptrons. Further experiments performed in a comparative setting have shown that it could be adopted as a viable alternative to the classical backpropagation algorithm.
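To make the abstract's central tool concrete: a point in a phi-delta diagram can be computed from a binary classifier's (or a candidate feature's) confusion counts. The sketch below assumes one common formulation, delta = sensitivity + specificity − 1 and phi = sensitivity − specificity; the authors' exact conventions may differ, so treat this as an illustrative approximation rather than the paper's definition.

```python
def phi_delta(tp, fn, tn, fp):
    """Map binary confusion counts to assumed (phi, delta) coordinates.

    delta gauges discriminant capability (1 = perfect, 0 = chance-like),
    phi gauges bias toward one of the two classes.
    """
    sensitivity = tp / (tp + fn)  # true positive rate
    specificity = tn / (tn + fp)  # true negative rate
    delta = sensitivity + specificity - 1.0
    phi = sensitivity - specificity
    return phi, delta

# A perfect classifier would sit at (0, 1); balanced random guessing near (0, 0).
phi, delta = phi_delta(tp=90, fn=10, tn=80, fp=20)
print(round(phi, 3), round(delta, 3))  # -> 0.1 0.7
```

Plotting such points for every hidden unit (thresholded as if it were a binary feature) is, in spirit, how the article inspects hidden layers as hosts of new input features.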


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/20c8/7721750/7ebadfcea22a/41598_2020_76517_Fig11_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/20c8/7721750/e05e7af76486/41598_2020_76517_Fig1_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/20c8/7721750/41b558df58fd/41598_2020_76517_Fig2_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/20c8/7721750/edb11fe76729/41598_2020_76517_Fig3_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/20c8/7721750/0cf40ca48bfd/41598_2020_76517_Fig4_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/20c8/7721750/191bc33271ca/41598_2020_76517_Fig5_HTML.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/20c8/7721750/1fd351c6d88b/41598_2020_76517_Fig6_HTML.jpg

Similar Articles

1. Using phidelta diagrams to discover relevant patterns in multilayer perceptrons. Sci Rep. 2020 Dec 7;10(1):21334. doi: 10.1038/s41598-020-76517-0.
2. Devising novel performance measures for assessing the behavior of multilayer perceptrons trained on regression tasks. PLoS One. 2023 May 18;18(5):e0285471. doi: 10.1371/journal.pone.0285471. eCollection 2023.
3. A new error function at hidden layers for fast training of multilayer perceptrons. IEEE Trans Neural Netw. 1999;10(4):960-4. doi: 10.1109/72.774272.
4. Statistical active learning in multilayer perceptrons. IEEE Trans Neural Netw. 2000;11(1):17-26. doi: 10.1109/72.822506.
5. Fast training of multilayer perceptrons. IEEE Trans Neural Netw. 1997;8(6):1314-20. doi: 10.1109/72.641454.
6. On the initialization and optimization of multilayer perceptrons. IEEE Trans Neural Netw. 1994;5(5):738-51. doi: 10.1109/72.317726.
7. Performance evaluation of multilayer perceptrons in signal detection and classification. IEEE Trans Neural Netw. 1995;6(2):381-6. doi: 10.1109/72.363473.
8. A learning rule for very simple universal approximators consisting of a single layer of perceptrons. Neural Netw. 2008 Jun;21(5):786-95. doi: 10.1016/j.neunet.2007.12.036. Epub 2007 Dec 31.
9. Specification of training sets and the number of hidden neurons for multilayer perceptrons. Neural Comput. 2001 Dec;13(12):2673-80. doi: 10.1162/089976601317098484.
10. An accelerated learning algorithm for multilayer perceptrons: optimization layer by layer. IEEE Trans Neural Netw. 1995;6(1):31-42. doi: 10.1109/72.363452.

Cited By

1. Devising novel performance measures for assessing the behavior of multilayer perceptrons trained on regression tasks. PLoS One. 2023 May 18;18(5):e0285471. doi: 10.1371/journal.pone.0285471. eCollection 2023.

References

1. Enhancing Explainability of Neural Networks Through Architecture Constraints. IEEE Trans Neural Netw Learn Syst. 2021 Jun;32(6):2610-2621. doi: 10.1109/TNNLS.2020.3007259. Epub 2021 Jun 2.
2. On Kernel Method-Based Connectionist Models and Supervised Deep Learning Without Backpropagation. Neural Comput. 2020 Jan;32(1):97-135. doi: 10.1162/neco_a_01250. Epub 2019 Nov 8.
3. Understanding autoencoders with information theoretic concepts. Neural Netw. 2019 Sep;117:104-123. doi: 10.1016/j.neunet.2019.05.003. Epub 2019 May 15.
4. Random synaptic feedback weights support error backpropagation for deep learning. Nat Commun. 2016 Nov 8;7:13276. doi: 10.1038/ncomms13276.
5. The precision-recall plot is more informative than the ROC plot when evaluating binary classifiers on imbalanced datasets. PLoS One. 2015 Mar 4;10(3):e0118432. doi: 10.1371/journal.pone.0118432. eCollection 2015.
6. 100% classification accuracy considered harmful: the normalized information transfer factor explains the accuracy paradox. PLoS One. 2014 Jan 10;9(1):e84217. doi: 10.1371/journal.pone.0084217. eCollection 2014.
7. Reducing the dimensionality of data with neural networks. Science. 2006 Jul 28;313(5786):504-7. doi: 10.1126/science.1127647.
8. A fast learning algorithm for deep belief nets. Neural Comput. 2006 Jul;18(7):1527-54. doi: 10.1162/neco.2006.18.7.1527.
9. Digital selection and analogue amplification coexist in a cortex-inspired silicon circuit. Nature. 2000 Jun 22;405(6789):947-51. doi: 10.1038/35016072.
10. Auto-association by multilayer perceptrons and singular value decomposition. Biol Cybern. 1988;59(4-5):291-4. doi: 10.1007/BF00332918.