Complex Systems Group, School of Physics, University of Sydney, Sydney, NSW, Australia.
Optima Consortium for Decision Science, Melbourne, VIC, Australia.
PLoS One. 2018 Mar 16;13(3):e0192944. doi: 10.1371/journal.pone.0192944. eCollection 2018.
When standard optimization methods fail to find a satisfactory solution for a parameter fitting problem, a tempting recourse is to adjust parameters manually. While tedious, this approach can be surprisingly powerful in terms of achieving optimal or near-optimal solutions. This paper outlines an optimization algorithm, Adaptive Stochastic Descent (ASD), that has been designed to replicate the essential aspects of manual parameter fitting in an automated way. Specifically, ASD uses simple principles to form probabilistic assumptions about (a) which parameters have the greatest effect on the objective function, and (b) optimal step sizes for each parameter. We show that for a certain class of optimization problems (namely, those with a moderate to large number of scalar parameter dimensions, especially if some dimensions are more important than others), ASD is capable of minimizing the objective function with far fewer function evaluations than classic optimization methods, such as the Nelder-Mead nonlinear simplex, Levenberg-Marquardt gradient descent, simulated annealing, and genetic algorithms. As a case study, we show that ASD outperforms standard algorithms when used to determine how resources should be allocated in order to minimize new HIV infections in Swaziland.
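The core idea described in the abstract — maintaining per-parameter step sizes and success-based selection probabilities, growing both after an improving step and shrinking them after a failed one — can be sketched as follows. This is an illustrative reconstruction, not the paper's exact algorithm: the function name `asd_minimize`, the `grow`/`shrink` factors, and the update rules are all assumptions.

```python
import numpy as np

def asd_minimize(f, x0, max_evals=500, grow=2.0, shrink=0.5, seed=0):
    """Minimal sketch of an Adaptive-Stochastic-Descent-style search.

    Tracks a step size and an (unnormalized) selection weight for each
    of the 2n signed axis directions. Successful moves make a direction
    both bolder (larger step) and more likely to be chosen again,
    mimicking how a human tuner focuses on the parameters that help.
    """
    rng = np.random.default_rng(seed)
    x = np.asarray(x0, dtype=float)
    n = x.size
    steps = np.ones(2 * n)    # per-direction step sizes
    probs = np.ones(2 * n)    # per-direction selection weights
    fx = f(x)
    for _ in range(max_evals - 1):
        # Pick a signed direction in proportion to its past success.
        k = rng.choice(2 * n, p=probs / probs.sum())
        i, sign = divmod(k, 2)
        trial = x.copy()
        trial[i] += (1.0 if sign == 0 else -1.0) * steps[k]
        ft = f(trial)
        if ft < fx:                      # improvement: accept, be bolder
            x, fx = trial, ft
            steps[k] *= grow
            probs[k] *= grow
        else:                            # no improvement: reject, be cautious
            steps[k] *= shrink
            probs[k] = max(probs[k] * shrink, 1e-6)
    return x, fx

# An anisotropic quadratic where one dimension matters far more than the
# other -- the setting in which the abstract claims ASD-style methods shine.
x_opt, f_opt = asd_minimize(lambda x: 100 * x[0]**2 + x[1]**2, [3.0, 3.0])
```

The adaptive weights mean the search quickly concentrates its evaluations on the steep `x[0]` axis, which is the behavior the abstract attributes to manual tuning.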