
An analysis of training and generalization errors in shallow and deep networks.

Affiliations

Institute of Mathematical Sciences, Claremont Graduate University, Claremont, CA 91711, United States of America.

Center for Brains, Minds, and Machines, McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, 02139, United States of America.

Publication Information

Neural Netw. 2020 Jan;121:229-241. doi: 10.1016/j.neunet.2019.08.028. Epub 2019 Sep 7.

Abstract

This paper is motivated by an open problem around deep networks, namely, the apparent absence of over-fitting despite large over-parametrization that allows perfect fitting of the training data. We analyze this phenomenon for regression problems in which each unit evaluates a periodic activation function. We argue that the minimal expected value of the square loss is inappropriate for measuring the generalization error in the approximation of compositional functions, because it does not take full advantage of the compositional structure. Instead, we measure the generalization error in the sense of maximum loss, and sometimes as a pointwise error. We give estimates of exactly how many parameters ensure both zero training error and a good generalization error. We prove that a solution of a regularization problem is guaranteed to yield a good training error as well as a good generalization error, and we estimate how much error to expect at which test data.
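
To make the distinction drawn in the abstract concrete, here is a minimal, illustrative sketch in Python (not the paper's construction; the sine activation, the random-feature layer, the ridge penalty, and all parameter values are assumptions made for this example). It fits a heavily over-parametrized one-hidden-layer model with a periodic activation by regularized least squares, then reports the training error together with both the expected square loss and the maximum (sup-norm) loss on test points.

import numpy as np

rng = np.random.default_rng(0)

def target(x):
    # Smooth target standing in for the regression problem.
    return np.sin(3 * x) + 0.5 * np.cos(7 * x)

# Training samples and a dense test grid on [-pi, pi].
x_train = rng.uniform(-np.pi, np.pi, size=200)
y_train = target(x_train)
x_test = np.linspace(-np.pi, np.pi, 2000)
y_test = target(x_test)

# Hidden layer with a periodic activation, phi_j(x) = sin(w_j * x + b_j),
# using many more units than training points (over-parametrization).
n_units = 500
w = rng.normal(0.0, 3.0, size=n_units)
b = rng.uniform(0.0, 2 * np.pi, size=n_units)

def features(x):
    return np.sin(np.outer(x, w) + b)

# Output weights from ridge-regularized least squares, standing in for the
# regularization problem mentioned in the abstract.
lam = 1e-6
Phi = features(x_train)
coef = np.linalg.solve(Phi.T @ Phi + lam * np.eye(n_units), Phi.T @ y_train)

err_train = features(x_train) @ coef - y_train
err_test = features(x_test) @ coef - y_test
print("training MSE:           ", np.mean(err_train ** 2))
print("expected square loss:   ", np.mean(err_test ** 2))
print("maximum (sup-norm) loss:", np.max(np.abs(err_test)))

With enough units the training error is driven essentially to zero, while the mean squared test error and the maximum test error can differ noticeably; the abstract's point is that the latter, worst-case measure is the appropriate notion of generalization error for compositional function approximation.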

