Huggins William J, Wan Kianna, McClean Jarrod, O'Brien Thomas E, Wiebe Nathan, Babbush Ryan
Google Quantum AI, Mountain View, California 94043, USA.
Stanford Institute for Theoretical Physics, Stanford University, Stanford, California 94305, USA.
Phys Rev Lett. 2022 Dec 9;129(24):240501. doi: 10.1103/PhysRevLett.129.240501.
Many quantum algorithms involve the evaluation of expectation values. Optimal strategies for estimating a single expectation value are known, requiring a number of state preparations that scales with the target error ϵ as O(1/ϵ). In this Letter, we address the task of estimating the expectation values of M different observables, each to within additive error ϵ, with the same 1/ϵ dependence. We describe an approach that leverages Gilyén et al.'s quantum gradient estimation algorithm to achieve O(√M/ϵ) scaling up to logarithmic factors, regardless of the commutation properties of the M observables. We prove that this scaling is worst-case optimal in the high-precision regime if the state preparation is treated as a black box, even when the operators are mutually commuting. We highlight the flexibility of our approach by presenting several generalizations, including a strategy for accelerating the estimation of a collection of dynamic correlation functions.
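To make the headline scaling concrete, the following minimal sketch (not from the paper) compares the total state-preparation count of two strategies implied by the abstract: repeating the optimal single-observable O(1/ϵ) estimator for each of the M observables versus the gradient-based O(√M/ϵ) approach. Constant prefactors and the logarithmic factors mentioned in the abstract are suppressed, so the numbers are illustrative asymptotics only; the function names are hypothetical.

```python
import math

def repeated_single_observable_cost(M, eps):
    # Repeat the optimal single-observable strategy, O(1/eps) state
    # preparations each (per the abstract), once per observable: O(M/eps).
    return M * (1.0 / eps)

def gradient_based_cost(M, eps):
    # The gradient-estimation approach of the paper: O(sqrt(M)/eps),
    # up to logarithmic factors and suppressed constants.
    return math.sqrt(M) / eps

# Example: M = 10,000 observables, target additive error eps = 1e-3.
M, eps = 10_000, 1e-3
print(repeated_single_observable_cost(M, eps))  # about 1e7 state preparations
print(gradient_based_cost(M, eps))              # about 1e5 state preparations
```

The √M separation grows with the number of observables, which is why the approach pays off for large collections of observables such as the dynamic correlation functions mentioned at the end of the abstract.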