Du Yuxuan, Hsieh Min-Hsiu, Tao Dacheng
College of Computing and Data Science, Nanyang Technological University, Singapore, Singapore.
Hon Hai (Foxconn) Research Institute, Taipei, Taiwan.
Nat Commun. 2025 Apr 22;16(1):3790. doi: 10.1038/s41467-025-59198-z.
The vast and complicated many-qubit state space prevents us from comprehensively capturing the dynamics of modern quantum computers via classical simulations or quantum tomography. Recent progress in quantum learning theory prompts a crucial question: can linear properties of a many-qubit circuit with d tunable RZ gates and G - d Clifford gates be efficiently learned from measurement data generated by varying classical inputs? In this work, we prove that a sample complexity scaling linearly in d is required to achieve a small prediction error, while the corresponding computational complexity may scale exponentially in d. To address this challenge, we propose a kernel-based method leveraging classical shadows and truncated trigonometric expansions, enabling a controllable trade-off between prediction accuracy and computational overhead. Our results advance two crucial realms of quantum computation: the exploration of quantum algorithms with practical utility and learning-based quantum system certification. We validate our proposals with numerical simulations across diverse scenarios, encompassing quantum information processing protocols, Hamiltonian simulation, and variational quantum algorithms with up to 60 qubits.
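The idea of learning circuit properties via a truncated trigonometric expansion can be sketched as follows: each tunable RZ angle contributes frequencies {-1, 0, +1}, so a linear property is a trigonometric polynomial over the d parameters, and truncating to low-sparsity frequencies trades accuracy for tractability. This is a minimal illustrative sketch, not the paper's implementation; the dimensions, cutoff LAMBDA, synthetic target, and noise model are all assumptions standing in for classical-shadow estimates.

```python
# Hedged sketch: fit a linear property f(theta) of a circuit with d tunable
# RZ gates using a truncated trigonometric feature map plus ridge regression.
# All concrete values (d, LAMBDA, noise scale) are illustrative assumptions.
import itertools
import numpy as np

rng = np.random.default_rng(0)
d = 4        # number of tunable RZ gates (assumed)
LAMBDA = 2   # truncation: keep frequencies with at most LAMBDA nonzero entries

# Each RZ angle contributes frequencies {-1, 0, +1}; truncate by sparsity.
freqs = np.array([w for w in itertools.product((-1, 0, 1), repeat=d)
                  if sum(x != 0 for x in w) <= LAMBDA])

def features(thetas):
    """Map parameter vectors of shape (n, d) to truncated trig features."""
    phases = thetas @ freqs.T                       # (n, |freqs|)
    return np.concatenate([np.cos(phases), np.sin(phases)], axis=1)

# Synthetic ground truth: a random low-degree trigonometric polynomial,
# standing in for noisy expectation-value estimates from classical shadows.
true_c = rng.normal(size=2 * len(freqs))
def target(thetas):
    return features(thetas) @ true_c

n_train = 500
X = rng.uniform(0.0, 2 * np.pi, size=(n_train, d))
y = target(X) + rng.normal(scale=0.05, size=n_train)  # shot/shadow noise

# Ridge regression in the truncated feature space (the explicit kernel is
# unnecessary here because truncation keeps the feature dimension small).
Phi = features(X)
lam = 1e-3
coef = np.linalg.solve(Phi.T @ Phi + lam * np.eye(Phi.shape[1]), Phi.T @ y)

X_test = rng.uniform(0.0, 2 * np.pi, size=(100, d))
err = float(np.mean((features(X_test) @ coef - target(X_test)) ** 2))
print(f"mean squared prediction error: {err:.4f}")
```

Raising LAMBDA toward d recovers the full expansion (and the exponential-in-d cost the paper proves), while lowering it shrinks the feature space at the price of a larger truncation error, which is the controllable trade-off the abstract describes.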