
Computer-aided optimal designs for improving neural network generalization.

Author Information

Issanchou Sébastien, Gauchi Jean-Pierre

Affiliation

RHODIA SA, Bordeaux, France.

Publication Information

Neural Netw. 2008 Sep;21(7):945-50. doi: 10.1016/j.neunet.2008.05.012. Epub 2008 Jun 13.

Abstract

In this article we propose a new insight into the field of feed-forward neural network modeling. We considered the framework of nonlinear regression models to construct computer-aided D-optimal designs for this class of neural models. These designs can be seen as a particular case of active learning. Classical algorithms are used to construct local approximate and local exact D-optimal designs. We observed that the so-called generalization of a neural network (the equivalent term, "predictive ability", is more familiar to statisticians) is improved when the D-efficiency of the chosen "learning set design" increases. We thus showed that the D-efficiency criterion can be the basis for a better strategy for the neural network learning phase than the standard uniform random strategy encountered in this field. Our proposition is based on two possible strategies: a One-Step Strategy or a Full Sequential Strategy. Intensive Monte Carlo simulations with an academic example show that the proposed D-optimal "learning set design" strategies lead to a substantial improvement in the use of neural network models.

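To make the design criterion concrete, the sketch below (not taken from the paper) treats a one-hidden-layer tanh network as a nonlinear regression model, builds the local Fisher information matrix at a nominal parameter value, and searches for an exact local D-optimal design with a simple Fedorov-type point exchange, one of the classical algorithms the abstract alludes to. The network architecture, the nominal parameters, and all names (net_gradient, fedorov_exchange, d_efficiency, and so on) are illustrative assumptions rather than the authors' implementation; a d_efficiency helper is included only to show the criterion the abstract relates to generalization.

```python
# Minimal sketch: local exact D-optimal design for a 1-hidden-layer tanh network
# viewed as a nonlinear regression model (assumed setup, not the paper's code).
import numpy as np

def net_gradient(x, theta, n_hidden):
    """Gradient of eta(x, theta) w.r.t. theta for
    eta = b0 + sum_h v_h * tanh(w_h * x + c_h),
    theta = [b0, v_1..v_H, w_1..w_H, c_1..c_H]."""
    H = n_hidden
    v = theta[1:1 + H]
    w = theta[1 + H:1 + 2 * H]
    c = theta[1 + 2 * H:1 + 3 * H]
    z = np.tanh(w * x + c)
    dz = 1.0 - z ** 2                       # derivative of tanh at w*x + c
    grad = np.empty(1 + 3 * H)
    grad[0] = 1.0                           # d eta / d b0
    grad[1:1 + H] = z                       # d eta / d v_h
    grad[1 + H:1 + 2 * H] = v * dz * x      # d eta / d w_h
    grad[1 + 2 * H:1 + 3 * H] = v * dz      # d eta / d c_h
    return grad

def info_matrix(design, theta, n_hidden):
    """Local Fisher information M(xi, theta0) = sum_i g(x_i) g(x_i)^T."""
    G = np.array([net_gradient(x, theta, n_hidden) for x in design])
    return G.T @ G

def log_det(M, ridge=1e-10):
    # A small ridge keeps the criterion finite for singular starting designs.
    sign, ld = np.linalg.slogdet(M + ridge * np.eye(M.shape[0]))
    return ld if sign > 0 else -np.inf

def d_efficiency(M, M_opt):
    """D-efficiency of a design relative to the optimal one: (det M / det M*)^(1/p)."""
    p = M.shape[0]
    return np.exp((log_det(M) - log_det(M_opt)) / p)

def fedorov_exchange(candidates, n_points, theta, n_hidden, n_sweeps=200, seed=0):
    """Greedy point-exchange search for an exact local D-optimal design."""
    rng = np.random.default_rng(seed)
    design = list(rng.choice(candidates, size=n_points, replace=True))
    best = log_det(info_matrix(design, theta, n_hidden))
    for _ in range(n_sweeps):
        improved = False
        for i in range(n_points):
            for x_new in candidates:
                trial = design.copy()
                trial[i] = x_new
                crit = log_det(info_matrix(trial, theta, n_hidden))
                if crit > best + 1e-9:
                    design, best = trial, crit
                    improved = True
        if not improved:
            break
    return np.sort(design), best

if __name__ == "__main__":
    H = 2                                                       # hidden units (assumed)
    theta0 = np.array([0.1, 1.0, -0.8, 2.0, -1.5, 0.3, 0.7])    # nominal parameters (assumed)
    grid = np.linspace(-1.0, 1.0, 41)                           # candidate learning points

    design, crit = fedorov_exchange(grid, n_points=7, theta=theta0, n_hidden=H)
    print("exact local D-optimal design:", design)
    print("log det M(xi, theta0):", crit)

    # Compare against a uniform random learning design, the baseline strategy
    # the abstract refers to, via its D-efficiency relative to the optimal design.
    rng = np.random.default_rng(1)
    random_design = rng.uniform(-1.0, 1.0, size=7)
    eff = d_efficiency(info_matrix(random_design, theta0, H),
                       info_matrix(design, theta0, H))
    print("D-efficiency of a uniform random design:", eff)
```

Because the information matrix of a nonlinear model depends on the unknown parameters, the design above is only locally optimal at the guessed theta0; the Full Sequential Strategy mentioned in the abstract would re-estimate the parameters and re-optimize the design as new learning points are acquired, which this sketch does not attempt.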
