Hsin-Yuan Huang, Richard Kueng, John Preskill
Institute for Quantum Information and Matter, Caltech, Pasadena, California 91125, USA.
Department of Computing and Mathematical Sciences, Caltech, Pasadena, California 91125, USA.
Phys Rev Lett. 2021 May 14;126(19):190505. doi: 10.1103/PhysRevLett.126.190505.
We study the performance of classical and quantum machine learning (ML) models in predicting outcomes of physical experiments. The experiments depend on an input parameter x and involve execution of a (possibly unknown) quantum process E. Our figure of merit is the number of runs of E required to achieve a desired prediction performance. We consider classical ML models that perform a measurement and record the classical outcome after each run of E, and quantum ML models that can access E coherently to acquire quantum data; the classical or quantum data are then used to predict outcomes of future experiments. We prove that, for any input distribution D(x), a classical ML model can provide accurate predictions on average by accessing E a number of times comparable to the optimal quantum ML model. In contrast, for achieving accurate predictions on all inputs, we prove that an exponential quantum advantage is possible. For example, to predict the expectation values of all Pauli observables in an n-qubit system ρ, classical ML models require 2^{Ω(n)} copies of ρ, but we present a quantum ML model using only O(n) copies. Our results clarify where quantum advantage is possible and highlight the potential of classical ML models for addressing challenging quantum problems in physics and chemistry.
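The Pauli-prediction task in the abstract can be made concrete with a classical-shadow-style protocol: a classical ML model that, after each run, measures every qubit in a uniformly random Pauli basis and records only the classical ±1 outcomes, then reuses those records to estimate the expectation value of any Pauli observable. The sketch below is our own minimal illustration under stated assumptions (all function names are ours, not from the paper), for a small n-qubit pure state simulated with NumPy:

```python
import numpy as np

rng = np.random.default_rng(0)

# Change-of-basis unitaries: measuring U|psi> in the computational (Z)
# basis is equivalent to measuring |psi> in the chosen Pauli basis.
I2 = np.eye(2, dtype=complex)
H = np.array([[1, 1], [1, -1]], dtype=complex) / np.sqrt(2)
Sdg = np.array([[1, 0], [0, -1j]], dtype=complex)
ROT = {"X": H, "Y": H @ Sdg, "Z": I2}

def sample_pauli_snapshot(psi, n):
    """One experimental run: measure each qubit of |psi> in a uniformly
    random Pauli basis; record the bases and the +/-1 outcomes."""
    bases = rng.choice(["X", "Y", "Z"], size=n)
    U = np.array([[1.0]], dtype=complex)
    for b in bases:
        U = np.kron(U, ROT[b])
    probs = np.abs(U @ psi) ** 2
    idx = rng.choice(2 ** n, p=probs / probs.sum())
    bits = [(idx >> (n - 1 - q)) & 1 for q in range(n)]
    outcomes = [1 - 2 * b for b in bits]  # bit 0 -> +1, bit 1 -> -1
    return bases, outcomes

def shadow_estimate(snapshots, pauli_string):
    """Classical-shadow estimator for <P>, with P given as e.g. 'ZX' or 'ZI'.
    Each non-identity site contributes 3*s when the random basis matched P,
    and kills the snapshot otherwise; averaging is unbiased."""
    total = 0.0
    for bases, outcomes in snapshots:
        val = 1.0
        for p, b, s in zip(pauli_string, bases, outcomes):
            if p == "I":
                continue
            val *= 3.0 * s if b == p else 0.0
        total += val
    return total / len(snapshots)

# Example: |psi> = |0> (x) |+>, so <ZI> = <IX> = <ZX> = 1 exactly.
psi = np.kron(np.array([1, 0], dtype=complex),
              np.array([1, 1], dtype=complex) / np.sqrt(2))
snapshots = [sample_pauli_snapshot(psi, 2) for _ in range(20000)]
print(shadow_estimate(snapshots, "ZI"), shadow_estimate(snapshots, "ZX"))
```

Note that the same 20,000 classical records serve for every Pauli observable; this is exactly the "measure-and-record" classical ML model of the abstract, and the 2^{Ω(n)} lower bound says that no such single-copy strategy can predict all 4^n Pauli expectation values from polynomially many runs, whereas the coherent quantum ML model in the paper can.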