IEEE Trans Cybern. 2022 Sep;52(9):9326-9338. doi: 10.1109/TCYB.2021.3053414. Epub 2022 Aug 18.
In this article, a novel training paradigm inspired by quantum computation is proposed for deep reinforcement learning (DRL) with experience replay. In contrast to the traditional experience replay mechanism in DRL, the proposed DRL with quantum-inspired experience replay (DRL-QER) adaptively chooses experiences from the replay buffer according to the complexity and the number of times each experience (also called a transition) has been replayed, to achieve a balance between exploration and exploitation. In DRL-QER, transitions are first formulated in quantum representations, and then the preparation operation and the depreciation operation are performed on them. In this process, the preparation operation reflects the relationship between the temporal-difference errors (TD-errors) and the importance of the experiences, while the depreciation operation is introduced to ensure the diversity of the transitions. The experimental results on Atari 2600 games show that DRL-QER outperforms state-of-the-art algorithms such as DRL-PER and DCRL on most of these games, with improved training efficiency, and is also applicable to memory-based DRL approaches such as the double network and the dueling network.
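To make the idea of quantum-inspired experience replay more concrete, the following is a minimal sketch, not the authors' exact formulation: each transition carries a qubit-like amplitude, the preparation step maps the absolute TD-error to that amplitude, the depreciation step shrinks the amplitude whenever the transition is replayed, and sampling probabilities are proportional to the squared amplitudes. The class name, the arctan-based preparation mapping, and the decay factor are illustrative assumptions introduced here, not definitions taken from the paper.

```python
import numpy as np

class QuantumInspiredReplayBuffer:
    """Illustrative sketch of quantum-inspired experience replay.

    Each stored transition is associated with a rotation angle theta; its
    probability of being replayed is proportional to sin^2(theta), which
    plays the role of a squared quantum amplitude.
    """

    def __init__(self, capacity, depreciation_factor=0.9):
        self.capacity = capacity
        self.k = depreciation_factor   # per-replay angle decay (assumed value)
        self.transitions = []          # stored (s, a, r, s_next, done) tuples
        self.angles = []               # rotation angle per transition

    def add(self, transition, td_error):
        # Drop the oldest transition when the buffer is full.
        if len(self.transitions) >= self.capacity:
            self.transitions.pop(0)
            self.angles.pop(0)
        self.transitions.append(transition)
        self.angles.append(self._prepare(td_error))

    def _prepare(self, td_error):
        # Preparation: larger |TD-error| -> larger angle -> higher replay
        # probability. The arctan mapping (bounded in (0, pi/2)) is an
        # assumption for illustration only.
        return np.arctan(abs(td_error)) + 1e-3

    def sample(self, batch_size):
        # Sampling probability of transition i is sin^2(theta_i), normalised.
        probs = np.sin(np.array(self.angles)) ** 2
        probs /= probs.sum()
        idx = np.random.choice(len(self.transitions), size=batch_size, p=probs)
        # Depreciation: shrink the angles of replayed transitions so that
        # frequently replayed experiences gradually lose priority,
        # preserving diversity in later batches.
        for i in idx:
            self.angles[i] *= self.k
        return [self.transitions[i] for i in idx]
```

Under these assumptions, the buffer would be used like a standard prioritized replay memory: `add` is called with each new transition and its TD-error, and `sample` draws mini-batches for the DRL update while automatically depreciating the priority of the experiences it returns.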