Marcus Gallagher, Marcus Frean
School of Information Technology and Electrical Engineering, University of Queensland, Brisbane, QLD 4072, Australia.
Evol Comput. 2005 Spring;13(1):29-42. doi: 10.1162/1063656053583478.
Evolutionary algorithms perform optimization using a population of sample solution points. An interesting development has been to view population-based optimization as the process of evolving an explicit, probabilistic model of the search space. This paper investigates a formal basis for continuous, population-based optimization in terms of a stochastic gradient descent on the Kullback-Leibler divergence between the model probability density and the objective function, represented as an unknown density of assumed form. This leads to an update rule that is related to, and compared with, previous theoretical work, a continuous version of the population-based incremental learning algorithm, and the generalized mean shift clustering framework. Experimental results are presented that demonstrate the dynamics of the new algorithm on a set of simple test problems.
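The abstract does not state the update rule explicitly; the sketch below is only an illustrative, assumed implementation of a continuous PBIL-style model-based optimizer in the same spirit (a Gaussian model whose mean is nudged toward the best samples). The sphere objective, learning rate, population sizes, and the sigma-shrinking heuristic are assumptions for illustration, not the algorithm derived in the paper.

    import numpy as np

    # Illustrative sketch (assumed, not the paper's derived rule):
    # maintain a Gaussian "model" over the search space and move its
    # mean toward well-performing samples each generation, in the
    # spirit of a continuous population-based incremental learning
    # (PBIL) update.

    def sphere(x):
        """Simple test objective to be minimized (assumed example)."""
        return np.sum(x ** 2, axis=-1)

    def continuous_pbil(objective, dim=2, pop_size=50, n_best=10,
                        lr=0.1, sigma=1.0, iters=200, seed=0):
        rng = np.random.default_rng(seed)
        mean = rng.uniform(-5.0, 5.0, size=dim)  # model mean
        for _ in range(iters):
            # sample a population from the current probabilistic model
            pop = mean + sigma * rng.standard_normal((pop_size, dim))
            fitness = objective(pop)
            # select the best samples (lowest objective values)
            best = pop[np.argsort(fitness)[:n_best]]
            # move the model mean toward the selected samples
            mean = (1.0 - lr) * mean + lr * best.mean(axis=0)
            # slowly shrink sigma to focus the search (common heuristic,
            # not taken from the paper)
            sigma *= 0.99
        return mean

    if __name__ == "__main__":
        print(continuous_pbil(sphere))  # should approach the origin

Such a mean update can be read as a stochastic gradient-style step on a divergence between the model density and a selection-weighted target, which is the kind of connection the paper formalizes.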