Exponential concentration in quantum kernel methods.

Author Information

Thanasilp Supanut, Wang Samson, Cerezo M, Holmes Zoë

Affiliations

Centre for Quantum Technologies, National University of Singapore, 3 Science Drive 2, Singapore.

Institute of Physics, Ecole Polytechnique Fédérale de Lausanne (EPFL), Lausanne, Switzerland.

Publication Information

Nat Commun. 2024 Jun 18;15(1):5200. doi: 10.1038/s41467-024-49287-w.

Abstract

Kernel methods in Quantum Machine Learning (QML) have recently gained significant attention as a potential candidate for achieving a quantum advantage in data analysis. Among other attractive properties, when training a kernel-based model one is guaranteed to find the model's optimal parameters due to the convexity of the training landscape. However, this rests on the assumption that the quantum kernel can be efficiently obtained from quantum hardware. In this work we study the performance of quantum kernel models from the perspective of the resources needed to accurately estimate kernel values. We show that, under certain conditions, the values of quantum kernels over different input data can be exponentially concentrated (in the number of qubits) towards some fixed value. Thus, when training with a polynomial number of measurements, one ends up with a trivial model whose predictions on unseen inputs are independent of the input data. We identify four sources that can lead to concentration: the expressivity of the data embedding, global measurements, entanglement, and noise. For each source, we analytically derive an associated concentration bound on quantum kernel values. Lastly, we show that when dealing with classical data, training a parametrized data embedding with a kernel-alignment method is also susceptible to exponential concentration. Our results are verified through numerical simulations for several QML tasks. Altogether, we provide guidelines indicating that certain features should be avoided in order to ensure the efficient evaluation of quantum kernels, and thus the performance of quantum kernel methods.
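To make the expressivity-driven concentration concrete, here is a minimal numerical sketch (illustrative only, not the authors' code; the names haar_random_state and fidelity_kernel are my own). It assumes a fidelity kernel k(x, x') = |⟨ψ(x)|ψ(x')⟩|² and an embedding expressive enough that the embedded states behave like Haar-random states, in which case kernel values concentrate at 1/2^n with exponentially vanishing variance:

```python
# Illustrative sketch (not from the paper): exponential concentration of the
# fidelity quantum kernel k(x, x') = |<psi(x)|psi(x')>|^2 when the embedding
# is so expressive that embedded states are approximately Haar-random.
import numpy as np

rng = np.random.default_rng(0)

def haar_random_state(num_qubits: int) -> np.ndarray:
    """Approximate a Haar-random pure state by normalizing a complex Gaussian vector."""
    dim = 2 ** num_qubits
    vec = rng.normal(size=dim) + 1j * rng.normal(size=dim)
    return vec / np.linalg.norm(vec)

def fidelity_kernel(psi: np.ndarray, phi: np.ndarray) -> float:
    """Fidelity kernel between two pure states: |<psi|phi>|^2."""
    return abs(np.vdot(psi, phi)) ** 2

for n in range(2, 12, 2):
    # Each "data point" is embedded into an (approximately) Haar-random state.
    samples = [fidelity_kernel(haar_random_state(n), haar_random_state(n))
               for _ in range(500)]
    # Both the mean and the spread of kernel values track 1/2^n, so a
    # polynomial number of measurement shots cannot resolve the differences
    # between kernel entries on hardware.
    print(f"n={n:2d}  mean={np.mean(samples):.2e}  "
          f"std={np.std(samples):.2e}  1/2^n={2.0 ** -n:.2e}")
```

The printed means and standard deviations decay roughly as 1/2^n, matching the intuition in the abstract: with only polynomially many measurement shots, every estimated kernel entry becomes statistically indistinguishable from a constant, yielding a trivial model.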

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/8d9d/11189509/50a8248d8fe4/41467_2024_49287_Fig1_HTML.jpg
