

Fast convergence rates of deep neural networks for classification.

Authors

Kim Yongdai, Ohn Ilsang, Kim Dongha

Affiliations

Department of Statistics and Department of Data Science, Seoul National University, Seoul 08826, Republic of Korea.

Department of Applied and Computational Mathematics and Statistics, The University of Notre Dame, Indiana 46530, USA.

Publication

Neural Netw. 2021 Jun;138:179-197. doi: 10.1016/j.neunet.2021.02.012. Epub 2021 Feb 23.

DOI: 10.1016/j.neunet.2021.02.012
PMID: 33676328
Abstract

We derive the fast convergence rates of a deep neural network (DNN) classifier with the rectified linear unit (ReLU) activation function learned using the hinge loss. We consider three cases for a true model: (1) a smooth decision boundary, (2) smooth conditional class probability, and (3) the margin condition (i.e., the probability of inputs near the decision boundary is small). We show that the DNN classifier learned using the hinge loss achieves fast rate convergences for all three cases provided that the architecture (i.e., the number of layers, number of nodes and sparsity) is carefully selected. An important implication is that DNN architectures are very flexible for use in various cases without much modification. In addition, we consider a DNN classifier learned by minimizing the cross-entropy, and show that the DNN classifier achieves a fast convergence rate under the conditions that the noise exponent and margin exponent are large. Even though they are strong, we explain that these two conditions are not too absurd for image classification problems. To confirm our theoretical explanation, we present the results of a small numerical study conducted to compare the hinge loss and cross-entropy.
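The abstract contrasts two surrogate losses for training the DNN classifier: the hinge loss and the cross-entropy. As a minimal sketch (not the authors' experimental setup), the two losses for a binary classifier with labels in {-1, +1} and raw scores f(x) can be computed as follows; the toy scores and labels are invented for illustration:

```python
import numpy as np

def hinge_loss(scores, labels):
    # Hinge loss: max(0, 1 - y * f(x)); penalizes any margin below 1,
    # even for correctly classified points.
    return np.mean(np.maximum(0.0, 1.0 - labels * scores))

def cross_entropy_loss(scores, labels):
    # Logistic cross-entropy on raw scores: log(1 + exp(-y * f(x)));
    # strictly positive everywhere, decaying with the margin.
    return np.mean(np.log1p(np.exp(-labels * scores)))

# Toy raw scores f(x) and true labels y in {-1, +1} (illustrative only)
scores = np.array([2.0, -0.5, 0.1, -3.0])
labels = np.array([1, -1, 1, -1])

print(hinge_loss(scores, labels))          # → 0.35
print(cross_entropy_loss(scores, labels))  # ≈ 0.3235
```

The hinge loss is exactly zero once the margin y·f(x) reaches 1, whereas the cross-entropy keeps rewarding larger margins; this qualitative difference is what the paper's theory and numerical study probe.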


Similar articles

1. Fast convergence rates of deep neural networks for classification.
Neural Netw. 2021 Jun;138:179-197. doi: 10.1016/j.neunet.2021.02.012. Epub 2021 Feb 23.

2. When Not to Classify: Anomaly Detection of Attacks (ADA) on DNN Classifiers at Test Time.
Neural Comput. 2019 Aug;31(8):1624-1670. doi: 10.1162/neco_a_01209. Epub 2019 Jul 1.

3. Improving robustness of a deep learning-based lung-nodule classification model of CT images with respect to image noise.
Phys Med Biol. 2021 Dec 7;66(24). doi: 10.1088/1361-6560/ac3d16.

4. A novel end-to-end classifier using domain transferred deep convolutional neural networks for biomedical images.
Comput Methods Programs Biomed. 2017 Mar;140:283-293. doi: 10.1016/j.cmpb.2016.12.019. Epub 2017 Jan 6.

5. Using deep learning to associate human genes with age-related diseases.
Bioinformatics. 2020 Apr 1;36(7):2202-2208. doi: 10.1093/bioinformatics/btz887.

6. Distorted image classification using neural activation pattern matching loss.
Neural Netw. 2023 Oct;167:50-64. doi: 10.1016/j.neunet.2023.07.050. Epub 2023 Aug 9.

7. Nonconvex Sparse Regularization for Deep Neural Networks and Its Optimality.
Neural Comput. 2022 Jan 14;34(2):476-517. doi: 10.1162/neco_a_01457.

8. Evaluating the Visualization of What a Deep Neural Network Has Learned.
IEEE Trans Neural Netw Learn Syst. 2017 Nov;28(11):2660-2673. doi: 10.1109/TNNLS.2016.2599820.

9. P-DIFF+: Improving learning classifier with noisy labels by Noisy Negative Learning loss.
Neural Netw. 2021 Dec;144:1-10. doi: 10.1016/j.neunet.2021.07.024. Epub 2021 Aug 2.

10. Deep learning-based detection and classification of geographic atrophy using a deep convolutional neural network classifier.
Graefes Arch Clin Exp Ophthalmol. 2018 Nov;256(11):2053-2060. doi: 10.1007/s00417-018-4098-2. Epub 2018 Aug 8.

Cited by

1. Enhanced framework for COVID-19 prediction with computed tomography scan images using dense convolutional neural network and novel loss function.
Comput Electr Eng. 2023 Jan;105:108479. doi: 10.1016/j.compeleceng.2022.108479. Epub 2022 Nov 14.

2. Smooth Function Approximation by Deep Neural Networks with General Activation Functions.
Entropy (Basel). 2019 Jun 26;21(7):627. doi: 10.3390/e21070627.