Suppr 超能文献



Trainability of Dissipative Perceptron-Based Quantum Neural Networks.

Affiliations

Theoretical Division, Los Alamos National Laboratory, Los Alamos, New Mexico 87545, USA.

Hearne Institute for Theoretical Physics and Department of Physics and Astronomy, Louisiana State University, Baton Rouge, Louisiana 70803, USA.

Publication Information

Phys Rev Lett. 2022 May 6;128(18):180505. doi: 10.1103/PhysRevLett.128.180505.

DOI: 10.1103/PhysRevLett.128.180505
PMID: 35594093
Abstract

Several architectures have been proposed for quantum neural networks (QNNs), with the goal of efficiently performing machine learning tasks on quantum data. Rigorous scaling results are urgently needed for specific QNN constructions to understand which, if any, will be trainable at a large scale. Here, we analyze the gradient scaling (and hence the trainability) for a recently proposed architecture that we call dissipative QNNs (DQNNs), where the input qubits of each layer are discarded at the layer's output. We find that DQNNs can exhibit barren plateaus, i.e., gradients that vanish exponentially in the number of qubits. Moreover, we provide quantitative bounds on the scaling of the gradient for DQNNs under different conditions, such as different cost functions and circuit depths, and show that trainability is not always guaranteed. Our work represents the first rigorous analysis of the scalability of a perceptron-based QNN.
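The "barren plateau" phenomenon the abstract describes can be illustrated numerically. The sketch below is not the paper's DQNN construction; it is a toy experiment showing the underlying effect of exponential concentration: for Haar-random n-qubit states, the expectation value of a single-qubit Pauli-Z observable concentrates around zero with variance shrinking roughly as 2^{-n}, the same scaling that makes cost-function gradients vanish exponentially in the number of qubits.

```python
import numpy as np

def haar_random_state(n_qubits, rng):
    """Sample an n-qubit pure state Haar-uniformly by normalizing
    a complex Gaussian vector."""
    dim = 2 ** n_qubits
    z = rng.standard_normal(dim) + 1j * rng.standard_normal(dim)
    return z / np.linalg.norm(z)

def z_expectation(state):
    """<psi| Z |psi> for Z acting on the first (most significant) qubit:
    +1 on the first half of the computational basis, -1 on the second."""
    half = len(state) // 2
    probs = np.abs(state) ** 2
    return probs[:half].sum() - probs[half:].sum()

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    for n in (2, 4, 6, 8):
        samples = [z_expectation(haar_random_state(n, rng))
                   for _ in range(2000)]
        # Variance is approximately 1/(2**n + 1): it halves with
        # each additional qubit, i.e. exponential concentration.
        print(n, np.var(samples))
```

For a Haar-random state in dimension d = 2^n, the variance of the observed expectation is about 1/(d + 1), so the printed variances should fall by roughly half per added qubit; on barren plateaus, gradients of the cost function concentrate the same way.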


Similar Articles

1. Trainability of Dissipative Perceptron-Based Quantum Neural Networks.
   Phys Rev Lett. 2022 May 6;128(18):180505. doi: 10.1103/PhysRevLett.128.180505.
2. Presence and Absence of Barren Plateaus in Tensor-Network Based Machine Learning.
   Phys Rev Lett. 2022 Dec 30;129(27):270501. doi: 10.1103/PhysRevLett.129.270501.
3. Theory of overparametrization in quantum neural networks.
   Nat Comput Sci. 2023 Jun;3(6):542-551. doi: 10.1038/s43588-023-00467-6. Epub 2023 Jun 26.
4. Quantum variational algorithms are swamped with traps.
   Nat Commun. 2022 Dec 15;13(1):7760. doi: 10.1038/s41467-022-35364-5.
5. Cost function dependent barren plateaus in shallow parametrized quantum circuits.
   Nat Commun. 2021 Mar 19;12(1):1791. doi: 10.1038/s41467-021-21728-w.
6. Stereoscopic scalable quantum convolutional neural networks.
   Neural Netw. 2023 Aug;165:860-867. doi: 10.1016/j.neunet.2023.06.027. Epub 2023 Jun 28.
7. Efficient Measure for the Expressivity of Variational Quantum Algorithms.
   Phys Rev Lett. 2022 Feb 25;128(8):080506. doi: 10.1103/PhysRevLett.128.080506.
8. The Dilemma of Quantum Neural Networks.
   IEEE Trans Neural Netw Learn Syst. 2024 Apr;35(4):5603-5615. doi: 10.1109/TNNLS.2022.3208313. Epub 2024 Apr 4.
9. Quantum Neural Networks and Topological Quantum Field Theories.
   Neural Netw. 2022 Sep;153:164-178. doi: 10.1016/j.neunet.2022.05.028. Epub 2022 Jun 7.
10. Randomness-Enhanced Expressivity of Quantum Neural Networks.
    Phys Rev Lett. 2024 Jan 5;132(1):010602. doi: 10.1103/PhysRevLett.132.010602.

Cited By

1. Does provable absence of barren plateaus imply classical simulability?
   Nat Commun. 2025 Aug 25;16(1):7907. doi: 10.1038/s41467-025-63099-6.
2. AQEA-QAS: An Adaptive Quantum Evolutionary Algorithm for Quantum Architecture Search.
   Entropy (Basel). 2025 Jul 8;27(7):733. doi: 10.3390/e27070733.
3. Error mitigation in brainbox quantum autoencoders.
   Sci Rep. 2025 Jan 17;15(1):2257. doi: 10.1038/s41598-024-84171-z.
4. A Lie algebraic theory of barren plateaus for deep parameterized quantum circuits.
   Nat Commun. 2024 Aug 22;15(1):7172. doi: 10.1038/s41467-024-49909-3.
5. Exponential concentration in quantum kernel methods.
   Nat Commun. 2024 Jun 18;15(1):5200. doi: 10.1038/s41467-024-49287-w.
6. Understanding quantum machine learning also requires rethinking generalization.
   Nat Commun. 2024 Mar 13;15(1):2277. doi: 10.1038/s41467-024-45882-z.
7. Quantum Graph Neural Network Models for Materials Search.
   Materials (Basel). 2023 Jun 10;16(12):4300. doi: 10.3390/ma16124300.
8. Quantum Neural Network for Quantum Neural Computing.
   Research (Wash D C). 2023 May 8;6:0134. doi: 10.34133/research.0134. eCollection 2023.
9. Generalization in quantum machine learning from few training data.
   Nat Commun. 2022 Aug 22;13(1):4919. doi: 10.1038/s41467-022-32550-3.
10. Variational quantum classifiers through the lens of the Hessian.
    PLoS One. 2022 Jan 20;17(1):e0262346. doi: 10.1371/journal.pone.0262346. eCollection 2022.