Harrow, Aram W.; Napp, John C.
Center for Theoretical Physics, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, USA.
Phys Rev Lett. 2021 Apr 9;126(14):140502. doi: 10.1103/PhysRevLett.126.140502.
Within a natural black-box setting, we exhibit a simple optimization problem for which a quantum variational algorithm that measures analytic gradients of the objective function with a low-depth circuit and performs stochastic gradient descent provably converges to an optimum faster than any algorithm that measures only the objective function itself. This settles the question of whether measuring analytic gradients in such algorithms can ever be beneficial. We also derive upper bounds on the cost of gradient-based variational optimization near a local minimum.
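The abstract refers to measuring analytic gradients with low-depth circuits and feeding them to stochastic gradient descent. A minimal sketch of that pipeline, assuming the standard parameter-shift rule on a toy single-qubit objective ⟨Z⟩ = cos(θ) (this toy model and all function names are illustrative assumptions, not the paper's construction):

```python
import numpy as np

rng = np.random.default_rng(0)

def measure_expectation(theta, shots=100):
    # Toy model (an assumption, not the paper's problem): circuit RY(theta)|0>
    # measured in the Z basis, so the exact objective is <Z> = cos(theta).
    # Finite shots give an unbiased but noisy estimate, as on real hardware.
    p_plus = (1.0 + np.cos(theta)) / 2.0
    outcomes = rng.choice([1.0, -1.0], size=shots, p=[p_plus, 1.0 - p_plus])
    return outcomes.mean()

def shift_rule_gradient(theta, shots=100):
    # Parameter-shift rule: for a gate generated by a Pauli operator,
    # d<Z>/dtheta = ( <Z>(theta + pi/2) - <Z>(theta - pi/2) ) / 2.
    # This is an *analytic* gradient obtained from two runs of the same
    # low-depth circuit, not a finite-difference approximation.
    return (measure_expectation(theta + np.pi / 2, shots)
            - measure_expectation(theta - np.pi / 2, shots)) / 2.0

def sgd_minimize(theta0=0.3, eta=0.4, steps=200, shots=100):
    # Stochastic gradient descent on the noisy analytic-gradient estimates.
    theta = theta0
    for _ in range(steps):
        theta -= eta * shift_rule_gradient(theta, shots)
    return theta

theta_opt = sgd_minimize()
print(np.cos(theta_opt))  # objective value near its minimum of -1
```

The shot noise makes each gradient estimate stochastic, so the loop above is SGD in the same sense as in the abstract: unbiased gradient samples drive convergence to the optimum without ever forming the exact objective.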