


Enhancing the accuracies by performing pooling decisions adjacent to the output layer.

Authors

Meir Yuval, Tzach Yarden, Gross Ronit D, Tevet Ofek, Vardi Roni, Kanter Ido

Affiliations

Department of Physics, Bar-Ilan University, 52900, Ramat Gan, Israel.

Gonda Interdisciplinary Brain Research Center, Bar-Ilan University, 52900, Ramat Gan, Israel.

Publication

Sci Rep. 2023 Aug 31;13(1):13385. doi: 10.1038/s41598-023-40566-y.

DOI: 10.1038/s41598-023-40566-y
PMID: 37652973
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10471572/
Abstract

Learning classification tasks of [Formula: see text] inputs typically consist of [Formula: see text] max-pooling (MP) operators along the entire feedforward deep architecture. Here we show, using the CIFAR-10 database, that pooling decisions adjacent to the last convolutional layer significantly enhance accuracies. In particular, average accuracies of the advanced-VGG with [Formula: see text] layers (A-VGGm) architectures are 0.936, 0.940, 0.954, 0.955, and 0.955 for m = 6, 8, 14, 13, and 16, respectively. The results indicate A-VGG8's accuracy is superior to VGG16's, and that the accuracies of A-VGG13 and A-VGG16 are equal, and comparable to that of Wide-ResNet16. In addition, replacing the three fully connected (FC) layers with one FC layer, A-VGG6 and A-VGG14, or with several linear activation FC layers, yielded similar accuracies. These significantly enhanced accuracies stem from training the most influential input-output routes, in comparison to the inferior routes selected following multiple MP decisions along the deep architecture. In addition, accuracies are sensitive to the order of the non-commutative MP and average pooling operators adjacent to the output layer, varying the number and location of training routes. The results call for the reexamination of previously proposed deep architectures and their accuracies by utilizing the proposed pooling strategy adjacent to the output layer.
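The abstract's observation that max pooling and average pooling are non-commutative can be checked directly: applied in different orders, the two operators generally give different results. A minimal NumPy sketch (illustrative only, not from the paper; the 2×2/stride-2 pooling and the 4×4 input are assumptions for the demo):

```python
import numpy as np

def max_pool2(x):
    # 2x2 max pooling with stride 2 on a 2D array with even sides
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).max(axis=(1, 3))

def avg_pool2(x):
    # 2x2 average pooling with stride 2 on a 2D array with even sides
    h, w = x.shape
    return x.reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))

x = np.arange(16, dtype=float).reshape(4, 4)

# Max pooling first selects the per-window winners, then averages them;
# average pooling first smooths the windows, then takes their maximum.
mp_then_ap = avg_pool2(max_pool2(x))  # -> 10.0
ap_then_mp = max_pool2(avg_pool2(x))  # -> 12.5
```

Because the operators do not commute, the order chosen adjacent to the output layer changes which elements carry the pooled value, which is consistent with the abstract's point that the order affects the number and location of training routes.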


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3f8d/10471572/56a7af6e2bab/41598_2023_40566_Fig1_HTML.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3f8d/10471572/8be24a504c8e/41598_2023_40566_Fig2_HTML.jpg
Figure 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/3f8d/10471572/a6dedae3819b/41598_2023_40566_Fig3_HTML.jpg
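A back-of-envelope way to see why fewer MP decisions leave more input-output routes trainable: under the simplifying assumption that each 2×2 max-pooling step backpropagates only through the single winning element of every window (1/4 of the remaining spatial positions), repeated MP stages thin the gradient-carrying routes geometrically. This is an illustrative sketch of the intuition, not the paper's calculation; the 32×32 map and stage counts are assumptions chosen to match CIFAR-10 and a VGG-style depth:

```python
def surviving_positions(side, num_mp_layers):
    """Spatial positions still receiving gradients after repeated 2x2 max pooling,
    assuming each stage keeps only the winning element per window."""
    return (side * side) // (4 ** num_mp_layers)

# A 32x32 CIFAR-10 feature map: five MP stages spread through the network
# (VGG-style) versus a single pooling decision adjacent to the output layer.
deep_mp = surviving_positions(32, 5)  # -> 1 route-carrying position
late_mp = surviving_positions(32, 1)  # -> 256 route-carrying positions
```

Under this toy model, concentrating the pooling decision near the output trains far more candidate routes, matching the abstract's explanation that the gain stems from training the most influential input-output routes rather than the few selected by many successive MP decisions.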

