Caro Matthias C, Huang Hsin-Yuan, Cerezo M, Sharma Kunal, Sornborger Andrew, Cincio Lukasz, Coles Patrick J
Department of Mathematics, Technical University of Munich, Garching, Germany.
Munich Center for Quantum Science and Technology (MCQST), Munich, Germany.
Nat Commun. 2022 Aug 22;13(1):4919. doi: 10.1038/s41467-022-32550-3.
Modern quantum machine learning (QML) methods involve variationally optimizing a parameterized quantum circuit on a training data set, and subsequently making predictions on a testing data set (i.e., generalizing). In this work, we provide a comprehensive study of generalization performance in QML after training on a limited number N of training data points. We show that the generalization error of a quantum machine learning model with T trainable gates scales at worst as √(T/N). When only K ≪ T gates have undergone substantial change in the optimization process, we prove that the generalization error improves to √(K/N). Our results imply that the compiling of unitaries into a polynomial number of native gates, a crucial application for the quantum computing industry that typically uses exponential-size training data, can be sped up significantly. We also show that classification of quantum states across a phase transition with a quantum convolutional neural network requires only a very small training data set. Other potential applications include learning quantum error correcting codes or quantum dynamical simulation. Our work injects new hope into the field of QML, as good generalization is guaranteed from few training data.
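To make the scaling concrete, the worst-case bound can be inverted to estimate how much training data suffices for a target generalization error ε: requiring √(T/N) ≤ ε gives N ≳ T/ε². The minimal sketch below illustrates this arithmetic in Python; the helper name `required_training_size` and the simplified, log-free form of the bound are illustrative assumptions, not code or notation from the paper.

```python
import math

def required_training_size(num_gates: int, epsilon: float) -> int:
    """Smallest N satisfying sqrt(num_gates / N) <= epsilon,
    i.e. N >= num_gates / epsilon**2 (log factors suppressed,
    as in the abstract's statement of the bound)."""
    return math.ceil(num_gates / epsilon ** 2)

# Worst case: T = 1000 trainable gates at target error 0.1 -> N = 100000.
print(required_training_size(1000, 0.1))
# If only K = 10 gates change substantially during optimization,
# the improved sqrt(K/N) bound needs only N = 1000.
print(required_training_size(10, 0.1))
```

Under the improved √(K/N) bound, the same target error with K ≪ T substantially changed gates needs proportionally fewer samples, which is what makes applications such as unitary compiling from far less than exponential-size training data plausible.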