Sorbonne Université, CNRS, LIP6, Paris, France
Evol Comput. 2021 Dec 1;29(4):521-541. doi: 10.1162/evco_a_00290.
It seems very intuitive that for the maximization of the OneMax problem $\mathrm{Om}(x) := \sum_{i=1}^{n} x_i$ the best that an elitist unary unbiased search algorithm can do is to store a best-so-far solution and to modify it with the operator that yields the best possible expected progress in function value. This assumption has been used implicitly in several empirical works. In Doerr et al. (2020), it was formally proven that this approach is indeed almost optimal. In this work, we prove that drift maximization is not optimal. More precisely, we show that for most fitness levels between $n/2$ and $2n/3$ the optimal mutation strengths are larger than the drift-maximizing ones. This implies that the optimal RLS is more risk-affine than the variant maximizing the stepwise expected progress. We show similar results for the mutation rates of the classic $(1+1)$ Evolutionary Algorithm (EA) and its resampling variant, the $(1+1)$ EA$_{>0}$. As a result of independent interest, we show that the optimal mutation strengths, unlike the drift-maximizing ones, can be even.
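To make the notion of stepwise expected progress concrete, the following minimal sketch (not part of the paper; the function name `onemax_drift`, the value n = 100, and the chosen fitness levels are illustrative assumptions) computes the drift of an elitist RLS variant that flips exactly k uniformly chosen distinct bits of a OneMax solution, and searches for the drift-maximizing k at a few fitness levels between n/2 and 2n/3. It does not compute the runtime-optimal mutation strengths analyzed in the paper; it only illustrates the drift-maximization baseline they are compared against.

```python
import math


def onemax_drift(n: int, fitness: int, k: int) -> float:
    """Expected fitness gain of an elitist k-bit-flip RLS step on OneMax.

    With z = n - fitness zero-bits in the current solution, the number j of
    zero-bits among the k flipped positions is hypergeometrically distributed,
    and the fitness change is j - (k - j) = 2j - k.  An elitist algorithm
    accepts the offspring only if this change is positive, so the drift is
    E[max(0, 2j - k)].
    """
    z = n - fitness                     # improvable (zero) positions
    total = math.comb(n, k)             # number of ways to pick k positions
    exp_gain = 0.0
    for j in range(0, min(k, z) + 1):
        gain = 2 * j - k
        if gain > 0:
            # math.comb returns 0 if k - j exceeds the number of one-bits
            prob = math.comb(z, j) * math.comb(n - z, k - j) / total
            exp_gain += prob * gain
    return exp_gain


if __name__ == "__main__":
    n = 100  # illustrative problem size
    for fitness in (50, 55, 60, 66):    # fitness levels between n/2 and 2n/3
        best_k = max(range(1, n + 1), key=lambda k: onemax_drift(n, fitness, k))
        print(f"fitness {fitness}: drift-maximizing k = {best_k}, "
              f"drift = {onemax_drift(n, fitness, best_k):.4f}")
```

In line with the abstract's closing remark, the drift-maximizing strengths found by such a search are odd, whereas the runtime-optimal strengths established in the paper can be even and, for most of these fitness levels, are larger.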