
Convergence analysis of sliding mode trajectories in multi-objective neural networks learning.

Affiliation

Department of Statistics, Universidade Federal de Minas Gerais, Belo Horizonte, MG 31270-901, Brazil.

Publication information

Neural Netw. 2012 Sep;33:21-31. doi: 10.1016/j.neunet.2012.04.006. Epub 2012 Apr 23.

Abstract

The Pareto-optimality concept is used in this paper to represent a constrained set of solutions that trade off the two main objective functions involved in supervised learning of neural networks: data-set error and network complexity. The neural network is described as a dynamic system with error and complexity as its state variables, and learning is presented as a process of controlling a learning trajectory in the resulting state space. To control the trajectories, sliding-mode dynamics are imposed on the network. It is shown that arbitrary learning trajectories can be achieved by maintaining the sliding-mode gains within their convergence intervals; formal proofs of the convergence conditions are presented. The concept of trajectory learning presented in this paper goes beyond the selection of a final state in the Pareto set, since that state can be reached through different trajectories, and intermediate states along a trajectory can be assessed individually against an additional objective function.
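The abstract's idea of steering a learning trajectory through an (error, complexity) state space can be illustrated with a loose sketch. The code below is not the authors' algorithm: the reference trajectory `r(t)`, the trade-off weight `lam`, the switching gain `k`, and the linear model are all illustrative assumptions. It defines a sliding surface as the gap between the current weighted error-plus-complexity state and a desired reference, and modulates a gradient step with a sign-based switching term, which is the basic sliding-mode mechanism the paper analyzes.

```python
import numpy as np

# Toy regression data (stand-in for the supervised learning task).
rng = np.random.default_rng(0)
X = rng.normal(size=(50, 3))
w_true = np.array([1.0, -2.0, 0.5])
y = X @ w_true + 0.1 * rng.normal(size=50)

def error(w):
    """Data-set error: mean squared error (first state variable)."""
    return np.mean((X @ w - y) ** 2)

def complexity(w):
    """Network complexity proxy: squared weight norm (second state variable)."""
    return np.dot(w, w)

w = np.zeros(3)
lam = 0.1   # assumed error/complexity trade-off weight
eta = 0.05  # base learning rate
k = 0.5     # assumed sliding-mode switching gain (kept < 1 for stability here)

for t in range(200):
    # Hypothetical reference trajectory: error budget shrinking linearly to 0.
    r = error(np.zeros(3)) * (1 - t / 200)
    # Sliding surface: deviation of the current state from the reference.
    s = error(w) + lam * complexity(w) - r
    # Gradient of the combined objective (MSE + lam * ||w||^2).
    grad = (2 / len(y)) * X.T @ (X @ w - y) + 2 * lam * w
    # Sign-based switching term speeds up or slows down the descent
    # depending on which side of the sliding surface the state is on.
    w -= eta * (1 + k * np.sign(s)) * grad
```

After the loop, the state has been driven down the error axis while the `lam * complexity` term keeps the weights bounded; varying `r(t)` would trace different paths to a similar final trade-off, which is the "trajectory learning" notion the abstract emphasizes.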
