Maximum Relevance Minimum Redundancy Dropout with Informative Kernel Determinantal Point Process.

Affiliations

INESC TEC and Faculty of Engineering, University of Porto, 4200-465 Porto, Portugal.

Department of Computer Science, University of Tulsa, Tulsa, OK 74104, USA.

Publication Info

Sensors (Basel). 2021 Mar 6;21(5):1846. doi: 10.3390/s21051846.

DOI: 10.3390/s21051846
PMID: 33800810
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7961777/
Abstract

In recent years, deep neural networks have shown significant progress in computer vision due to their large generalization capacity; however, the overfitting problem ubiquitously threatens the learning process of these highly nonlinear architectures. Dropout is a recent solution to mitigate overfitting that has witnessed significant success in various classification applications. Recently, many efforts have been made to improve standard dropout using an unsupervised merit-based semantic selection of neurons in the latent space. However, these studies do not consider the task-relevant information quality and quantity and the diversity of the latent kernels. To solve the challenge of dropping less informative neurons in deep learning, we propose an efficient end-to-end dropout algorithm that selects the most informative neurons with the highest correlation with the target output, accounting for sparsity in its selection procedure. First, to promote activation diversity, we devise an approach to select the most diverse set of neurons by making use of determinantal point process (DPP) sampling. Furthermore, to incorporate task specificity into deep latent features, a mutual information (MI)-based merit function is developed. Leveraging the proposed MI with DPP sampling, we introduce the novel DPPMI dropout that adaptively adjusts the retention rate of neurons based on their contribution to the neural network task. Empirical studies on real-world classification benchmarks, including MNIST, SVHN, CIFAR10, and CIFAR100, demonstrate the superiority of our proposed method over recent state-of-the-art dropout algorithms in the literature.
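The abstract combines two ingredients: a quality score per neuron (the MI-based merit function) and a diversity term (a DPP over latent kernels), fused into an L-ensemble from which a retained subset is drawn. The following is a minimal NumPy sketch of that idea only, not the authors' implementation: `dpp_dropout_mask` and all its parameters are hypothetical names, absolute Pearson correlation with the target stands in for the paper's MI merit function, and greedy log-determinant (MAP) selection stands in for exact DPP sampling.

```python
import numpy as np

def dpp_dropout_mask(acts, targets, keep=4):
    """Sketch of a DPPMI-style dropout mask (hypothetical API).

    acts: (batch, units) layer activations; targets: (batch,) labels.
    Quality q_i: |Pearson correlation(unit_i, target)|, a cheap
    surrogate for the paper's mutual-information merit function.
    Diversity: cosine similarity between unit activation profiles.
    Selection: greedy log-det maximization instead of exact DPP sampling.
    """
    b, n = acts.shape
    a = acts - acts.mean(axis=0)
    t = targets - targets.mean()
    # quality score per unit (MI surrogate)
    q = np.abs(a.T @ t) / (np.linalg.norm(a, axis=0) * np.linalg.norm(t) + 1e-8)
    # similarity kernel between unit activation profiles
    u = a / (np.linalg.norm(a, axis=0, keepdims=True) + 1e-8)
    S = u.T @ u
    # L-ensemble kernel: quality on the diagonal, similarity off-diagonal
    L = q[:, None] * S * q[None, :] + 1e-6 * np.eye(n)
    # greedy MAP: repeatedly add the unit with the largest log-det gain
    chosen = []
    for _ in range(keep):
        best, best_val = -1, -np.inf
        for i in range(n):
            if i in chosen:
                continue
            idx = chosen + [i]
            _, logdet = np.linalg.slogdet(L[np.ix_(idx, idx)])
            if logdet > best_val:
                best, best_val = i, logdet
        chosen.append(best)
    mask = np.zeros(n)
    mask[chosen] = 1.0
    return mask
```

In a training loop, the returned binary mask would multiply the layer's activations in place of a Bernoulli dropout mask; units that are both target-correlated and mutually dissimilar are preferentially retained.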


Figures (g001–g009):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d7cb/7961777/21ebb36f4758/sensors-21-01846-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d7cb/7961777/8af5bc30f438/sensors-21-01846-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d7cb/7961777/6fcc44843e71/sensors-21-01846-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d7cb/7961777/cc9caa7c63e6/sensors-21-01846-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d7cb/7961777/39db30152ea7/sensors-21-01846-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d7cb/7961777/10d88b5b6447/sensors-21-01846-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d7cb/7961777/11d4540791c5/sensors-21-01846-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d7cb/7961777/726fd76ef898/sensors-21-01846-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d7cb/7961777/69d37a6f35ba/sensors-21-01846-g009.jpg

Similar Articles

1. Maximum Relevance Minimum Redundancy Dropout with Informative Kernel Determinantal Point Process.
Sensors (Basel). 2021 Mar 6;21(5):1846. doi: 10.3390/s21051846.
2. Forward propagation dropout in deep neural networks using Jensen-Shannon and random forest feature importance ranking.
Neural Netw. 2023 Aug;165:238-247. doi: 10.1016/j.neunet.2023.05.044. Epub 2023 May 29.
3. Hybridized sine cosine algorithm with convolutional neural networks dropout regularization application.
Sci Rep. 2022 Apr 15;12(1):6302. doi: 10.1038/s41598-022-09744-2.
4. Regularization of deep neural networks with spectral dropout.
Neural Netw. 2019 Feb;110:82-90. doi: 10.1016/j.neunet.2018.09.009. Epub 2018 Oct 16.
5. Advanced Dropout: A Model-Free Methodology for Bayesian Dropout Optimization.
IEEE Trans Pattern Anal Mach Intell. 2022 Sep;44(9):4605-4625. doi: 10.1109/TPAMI.2021.3083089. Epub 2022 Aug 4.
6. Theory of adaptive SVD regularization for deep neural networks.
Neural Netw. 2020 Aug;128:33-46. doi: 10.1016/j.neunet.2020.04.021. Epub 2020 Apr 25.
7. Shakeout: A New Approach to Regularized Deep Neural Network Training.
IEEE Trans Pattern Anal Mach Intell. 2018 May;40(5):1245-1258. doi: 10.1109/TPAMI.2017.2701831. Epub 2017 May 5.
8. Adaptive Dropout Method Based on Biological Principles.
IEEE Trans Neural Netw Learn Syst. 2021 Sep;32(9):4267-4276. doi: 10.1109/TNNLS.2021.3070895. Epub 2021 Aug 31.
9. Determinantal point process attention over grid cell code supports out of distribution generalization.
Elife. 2024 Aug 1;12:RP89911. doi: 10.7554/eLife.89911.
10. Optimizing Kernel Machines Using Deep Learning.
IEEE Trans Neural Netw Learn Syst. 2018 Nov;29(11):5528-5540. doi: 10.1109/TNNLS.2018.2804895. Epub 2018 Mar 6.

References Cited in This Article

1. Advanced Dropout: A Model-Free Methodology for Bayesian Dropout Optimization.
IEEE Trans Pattern Anal Mach Intell. 2022 Sep;44(9):4605-4625. doi: 10.1109/TPAMI.2021.3083089. Epub 2022 Aug 4.
2. Energy Disaggregation via Deep Temporal Dictionary Learning.
IEEE Trans Neural Netw Learn Syst. 2020 May;31(5):1696-1709. doi: 10.1109/TNNLS.2019.2921952. Epub 2019 Jul 10.
3. Information Dropout: Learning Optimal Representations Through Noisy Computation.
IEEE Trans Pattern Anal Mach Intell. 2018 Dec;40(12):2897-2905. doi: 10.1109/TPAMI.2017.2784440. Epub 2018 Jan 10.
4. Correlational Neural Networks.
Neural Comput. 2016 Feb;28(2):257-85. doi: 10.1162/NECO_a_00801. Epub 2015 Dec 14.