Chenguang Dai, Jun S. Liu
Department of Statistics, Harvard University, Cambridge, Massachusetts, USA.
Phys Rev E. 2020 Mar;101(3-1):033301. doi: 10.1103/PhysRevE.101.033301.
We show that the Wang-Landau algorithm can be formulated as a stochastic gradient descent algorithm minimizing a smooth and convex objective function, whose gradient is estimated via Markov chain Monte Carlo iterations. This optimization formulation provides an alternative way to establish the convergence rate of the Wang-Landau algorithm, by exploiting the fact that, almost surely, the density estimates (on the logarithmic scale) remain in a compact set on which the objective function is strongly convex. The optimization viewpoint also motivates us to improve the efficiency of the Wang-Landau algorithm using popular tools such as the momentum method and adaptive learning rates. We demonstrate the accelerated Wang-Landau algorithm on a two-dimensional Ising model and a two-dimensional ten-state Potts model.
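Since the abstract only sketches the idea, the following is a minimal, self-contained illustration (not the authors' code): the standard Wang-Landau update on a small two-dimensional Ising lattice, written as a stochastic-gradient step on the log density-of-states estimates, with a heavy-ball momentum buffer added as one possible acceleration. The lattice size, iteration count, momentum coefficient, and the 1/t learning-rate schedule are illustrative assumptions; the accelerated scheme in the paper may differ in its details.

```python
import numpy as np

rng = np.random.default_rng(0)

# --- 2D Ising model setup (illustrative parameters, not from the paper) ---
L = 8                          # lattice side length; small for a quick demo
N = L * L
spins = rng.choice([-1, 1], size=(L, L))

def total_energy(s):
    """Nearest-neighbor Ising energy with periodic boundaries."""
    return -np.sum(s * (np.roll(s, 1, axis=0) + np.roll(s, 1, axis=1)))

def bin_index(E):
    """Map an energy in {-2N, ..., 2N} (steps of 4) to a histogram bin."""
    return (E + 2 * N) // 4

n_bins = N + 1                 # bins for energies -2N, -2N+4, ..., 2N
theta = np.zeros(n_bins)       # running estimates of log g(E)
velocity = np.zeros(n_bins)    # momentum buffer (assumed heavy-ball form)
beta_mom = 0.9                 # momentum coefficient (assumed value)

E = total_energy(spins)

for t in range(1, 100_001):
    # Propose a single-spin flip; the energy change involves 4 neighbors.
    i, j = rng.integers(L, size=2)
    nb = (spins[(i + 1) % L, j] + spins[(i - 1) % L, j]
          + spins[i, (j + 1) % L] + spins[i, (j - 1) % L])
    dE = 2 * spins[i, j] * nb
    # Wang-Landau acceptance: min(1, g(E)/g(E')) on the log scale, so the
    # walk favors energies whose current log-density estimate is smaller.
    if np.log(rng.random()) < theta[bin_index(E)] - theta[bin_index(E + dE)]:
        spins[i, j] *= -1
        E += dE

    # SGD view: the Wang-Landau update is a stochastic gradient step.  The
    # noisy descent direction is e_I - u, where e_I indicates the bin the
    # chain currently visits and u is the uniform target over bins.
    step = np.full(n_bins, -1.0 / n_bins)
    step[bin_index(E)] += 1.0

    eta = 1.0 / t              # 1/t learning rate, as in the SGD viewpoint
    velocity = beta_mom * velocity + step   # momentum-smoothed direction
    theta += eta * velocity

# Only relative log densities are identifiable, so fix the normalization.
theta -= theta.max()
```

With beta_mom = 0 this reduces to plain Wang-Landau under a 1/t flat-histogram schedule; the momentum buffer simply averages the noisy MCMC gradient estimates across iterations, which is the kind of acceleration the abstract refers to.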