
Generalization of Quantum Machine Learning Models Using Quantum Fisher Information Metric.

Author Information

Haug Tobias, Kim M S

Affiliations

Quantum Research Center, Technology Innovation Institute, Abu Dhabi, United Arab Emirates.

Blackett Laboratory, Imperial College London, London SW7 2AZ, United Kingdom.

Publication Information

Phys Rev Lett. 2024 Aug 2;133(5):050603. doi: 10.1103/PhysRevLett.133.050603.

Abstract

Generalization is the ability of machine learning models to make accurate predictions on new data by learning from training data. However, understanding generalization of quantum machine learning models has been a major challenge. Here, we introduce the data quantum Fisher information metric (DQFIM). It describes the capacity of variational quantum algorithms depending on variational ansatz, training data, and their symmetries. We apply the DQFIM to quantify circuit parameters and training data needed to successfully train and generalize. Using the dynamical Lie algebra, we explain how to generalize using a low number of training states. Counterintuitively, breaking symmetries of the training data can help to improve generalization. Finally, we find that out-of-distribution generalization, where training and testing data are drawn from different data distributions, can be better than using the same distribution. Our work provides a useful framework to explore the power of quantum machine learning models.
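The abstract's central object, a Fisher-information-style metric whose rank quantifies how many circuit parameters the training data can actually constrain, can be illustrated numerically. The sketch below is an illustrative simplification, not the paper's exact DQFIM definition: it computes the standard pure-state quantum Fisher information metric of a toy single-qubit ansatz by finite differences, averages it over training states, and compares matrix ranks. The ansatz, training states, and finite-difference scheme are all assumptions chosen for brevity.

```python
import numpy as np

# Pauli matrices for a single qubit
X = np.array([[0, 1], [1, 0]], dtype=complex)
Z = np.array([[1, 0], [0, -1]], dtype=complex)

def ansatz(theta):
    """Toy variational circuit U(theta) = Rz(t2) Rx(t1) Rz(t0)."""
    def rot(gen, t):
        # exp(-i t gen / 2) for an involutory generator (X or Z)
        return np.cos(t / 2) * np.eye(2) - 1j * np.sin(t / 2) * gen
    return rot(Z, theta[2]) @ rot(X, theta[1]) @ rot(Z, theta[0])

def qfim(theta, psi, eps=1e-6):
    """Pure-state QFIM of U(theta)|psi> via central finite differences:
    Q_ij = 4 Re[<d_i psi|d_j psi> - <d_i psi|psi><psi|d_j psi>]."""
    p = len(theta)
    state = ansatz(theta) @ psi
    dstates = []
    for k in range(p):
        tp, tm = theta.copy(), theta.copy()
        tp[k] += eps
        tm[k] -= eps
        dstates.append((ansatz(tp) @ psi - ansatz(tm) @ psi) / (2 * eps))
    Q = np.zeros((p, p))
    for i in range(p):
        for j in range(p):
            term = np.vdot(dstates[i], dstates[j]) \
                 - np.vdot(dstates[i], state) * np.vdot(state, dstates[j])
            Q[i, j] = 4 * term.real
    return Q

rng = np.random.default_rng(0)
theta = rng.uniform(0, 2 * np.pi, 3)

# A single training state |0> cannot probe all parameters: the first Rz
# only contributes a global phase, so one metric direction is null.
psi0 = np.array([1, 0], dtype=complex)
Q1 = qfim(theta, psi0)

# Averaging the metric over several random training states probes more
# independent directions, so the rank of the averaged metric can grow.
raw = rng.normal(size=(4, 2)) + 1j * rng.normal(size=(4, 2))
states = [v / np.linalg.norm(v) for v in raw]
Qavg = np.mean([qfim(theta, s) for s in states], axis=0)

print(np.linalg.matrix_rank(Q1, tol=1e-4),
      np.linalg.matrix_rank(Qavg, tol=1e-4))
```

The rank gap between the single-state and averaged metrics is the mechanism the abstract alludes to: more (or less symmetric) training states unlock more independent parameter directions, which is what the DQFIM tracks for realistic ansätze.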

