Auxiliary Task-Based Deep Reinforcement Learning for Quantum Control.

Author Information

Zhou Shumin, Ma Hailan, Kuang Sen, Dong Daoyi

Publication Information

IEEE Trans Cybern. 2025 Jan 7;PP. doi: 10.1109/TCYB.2024.3521300.

Abstract

Because it requires no prior knowledge of the environment, reinforcement learning (RL) has significant potential for solving quantum control problems. In this work, we investigate the effectiveness of continuous control policies based on the deep deterministic policy gradient (DDPG). To achieve high-fidelity control of quantum systems, we propose an auxiliary task-based deep RL (AT-DRL) method for quantum control. In particular, we design an auxiliary task that predicts the fidelity value and shares part of its parameters with the main network (from the main RL task). The auxiliary task is learned synchronously with the main task, which allows intrinsic features of the environment to be extracted and thus helps the agent reach the desired state with high fidelity. To further enhance the control performance, we also design a guided reward function based on the fidelity of quantum states that enables gradual fidelity improvement. Numerical simulations demonstrate that the proposed AT-DRL provides a good solution to the exploration of quantum dynamics. It not only achieves high task fidelities but also exhibits fast learning rates. Moreover, AT-DRL has great potential for designing control pulses that achieve effective quantum state preparation.
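
The abstract outlines the architecture (an auxiliary fidelity-prediction task sharing parameters with the main DDPG network) and a fidelity-guided reward, but gives no implementation details. The sketch below is a minimal illustration, not the authors' code: the class names, layer sizes, reward shape, and loss weighting are assumptions made for demonstration in PyTorch.

```python
# Illustrative sketch only: the paper's exact network, losses, and reward
# shaping are not specified in the abstract; everything below is an assumption.
import torch
import torch.nn as nn


class ActorWithAuxiliaryHead(nn.Module):
    """DDPG-style actor whose encoder is shared with an auxiliary head that
    predicts the fidelity of the current quantum state."""

    def __init__(self, state_dim: int, action_dim: int, hidden: int = 256):
        super().__init__()
        # Shared encoder: updated by both the main RL objective and the
        # auxiliary fidelity-prediction loss (the two tasks learn synchronously).
        self.encoder = nn.Sequential(
            nn.Linear(state_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        # Main head: continuous control pulses, bounded via tanh.
        self.policy_head = nn.Sequential(nn.Linear(hidden, action_dim), nn.Tanh())
        # Auxiliary head: regresses the fidelity value in [0, 1].
        self.fidelity_head = nn.Sequential(nn.Linear(hidden, 1), nn.Sigmoid())

    def forward(self, state: torch.Tensor):
        z = self.encoder(state)
        return self.policy_head(z), self.fidelity_head(z)


def guided_reward(fidelity: float, prev_best: float) -> float:
    """One possible fidelity-guided reward: reward incremental improvement of
    the best fidelity seen so far, plus a bonus near successful preparation."""
    return max(fidelity - prev_best, 0.0) + (10.0 if fidelity > 0.99 else 0.0)


# In training, the DDPG actor loss would be combined with the auxiliary
# regression loss, e.g. L_total = L_actor + lambda_aux * MSE(f_pred, f_true),
# so that gradients from both tasks update the shared encoder.
```

The design point mirrored here is parameter sharing: the auxiliary fidelity regression shapes the representation produced by the shared encoder, which the policy head also uses when generating control pulses.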
