Department of Statistics, Universidade Federal de Minas Gerais, Belo Horizonte, MG 31270-901, Brazil.
Neural Netw. 2012 Sep;33:21-31. doi: 10.1016/j.neunet.2012.04.006. Epub 2012 Apr 23.
The Pareto-optimality concept is used in this paper to represent a constrained set of solutions that trade off the two main objective functions involved in supervised learning of neural networks: data-set error and network complexity. The neural network is described as a dynamic system with error and complexity as its state variables, and learning is presented as the process of controlling a learning trajectory in the resulting state space. In order to control the trajectories, sliding mode dynamics is imposed on the network. It is shown that arbitrary learning trajectories can be achieved by keeping the sliding mode gains within their convergence intervals; formal proofs of the convergence conditions are presented. The concept of trajectory learning presented in this paper goes beyond the selection of a final state in the Pareto set, since that state can be reached through different trajectories, and the states along a trajectory can be assessed individually against an additional objective function.
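As a rough illustration of the trajectory-control idea described in the abstract, the following minimal Python sketch trains a small network so that the state pair (data-set error E, complexity C) is pushed toward a prescribed path in the (E, C) plane by a signum-gain term, in the spirit of sliding mode control. The squared-error E, the squared-norm complexity C, the desired path g(E), and the gain rho are all assumptions for illustration, not the paper's exact formulation or convergence conditions.

```python
import numpy as np

# Hypothetical sketch: a one-hidden-layer network whose learning trajectory in
# the (error E, complexity C) state space is steered toward a sliding surface
# s = C - g(E), where g(E) is an assumed desired complexity-vs-error path.
# The term rho * sign(s) is a generic sliding-mode gain, not the paper's law.

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))
y = np.sin(X[:, 0]) + 0.1 * rng.normal(size=100)

W = rng.normal(scale=0.5, size=(2, 8))   # input -> hidden weights
v = rng.normal(scale=0.5, size=8)        # hidden -> output weights

def error_and_grads(X, y, W, v):
    """Mean squared error E and its gradients w.r.t. W and v."""
    H = np.tanh(X @ W)
    r = H @ v - y
    E = 0.5 * np.mean(r ** 2)
    gv = H.T @ r / len(y)
    gW = X.T @ (np.outer(r, v) * (1 - H ** 2)) / len(y)
    return E, gW, gv

eta, rho = 0.05, 0.01                 # learning rate and sliding-mode gain (assumed)
g = lambda E: 4.0 - 20.0 * E          # hypothetical target path g(E) in (E, C) space

for step in range(500):
    E, gW, gv = error_and_grads(X, y, W, v)
    C = np.sum(W ** 2) + np.sum(v ** 2)   # complexity state variable
    s = C - g(E)                           # signed distance to the sliding surface
    # Gradient descent on E, plus a signum term that drives C back to the
    # surface: sign(s) > 0 shrinks the weights, sign(s) < 0 inflates them.
    W -= eta * gW + rho * np.sign(s) * 2 * W
    v -= eta * gv + rho * np.sign(s) * 2 * v
```

In this toy setting the role of the gain rho mirrors the abstract's claim: if it is small enough that the state cannot overshoot the surface irrecoverably, yet large enough to counteract the drift of C induced by the error-descent term, the trajectory stays near g(E) while E decreases, so different choices of g(E) realize different learning trajectories toward the Pareto set.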