Yang Yang, Li Jiang, Hou Jinyong, Wang Ye, Zhao Huadong
Changchun Institute of Optics, Fine Mechanics and Physics, Chinese Academy of Sciences, Changchun 130033, China.
University of Chinese Academy of Sciences, Beijing 100049, China.
Sensors (Basel). 2023 Nov 30;23(23):9520. doi: 10.3390/s23239520.
Multi-agent reinforcement learning excels at solving group intelligent decision-making problems that involve sequential decisions. In complex, high-dimensional state and action spaces in particular, such problems place greater demands on the reliability, stability, and adaptability of decision algorithms. Reinforcement learning algorithms based on the multi-agent deep policy gradient rely on function approximation with discriminant networks. However, this can introduce estimation errors when agents evaluate action values, reducing model reliability and stability and making convergence difficult. Moreover, as the environment grows more complex, the quality of the experience collected in the replay buffer declines, making the sampling stage inefficient and further hindering convergence. To address these challenges, we propose the empirical clustering layer-based multi-agent dual dueling policy gradient (ECL-MAD3PG) algorithm. Experimental results show that ECL-MAD3PG outperforms other methods in various complex environments, achieving a 9.1% improvement in mission completion rate over MADDPG in complex UAV cooperative combat scenarios.
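The abstract does not detail how the empirical clustering layer is implemented; the sketch below only illustrates the general idea of clustering replay experiences so that mini-batches stay diverse rather than being dominated by redundant, low-quality samples. It is a minimal sketch under assumed design choices (k-means over simple transition features, even per-cluster sampling); the ClusteredReplayBuffer class and its feature choice are hypothetical and not the authors' published implementation.

# Minimal illustrative sketch (assumptions noted above), not the ECL-MAD3PG implementation.
import numpy as np
from collections import deque
from sklearn.cluster import KMeans


class ClusteredReplayBuffer:
    """Replay buffer that spreads each mini-batch across clusters of stored experience."""

    def __init__(self, capacity=100_000, n_clusters=8):
        self.buffer = deque(maxlen=capacity)
        self.n_clusters = n_clusters

    def add(self, state, action, reward, next_state, done):
        self.buffer.append((state, action, reward, next_state, done))

    def sample(self, batch_size):
        # Group stored transitions by (state, reward) features so that one
        # region of redundant experience cannot dominate the sampled batch.
        transitions = list(self.buffer)
        feats = np.array([np.concatenate([np.ravel(s), [r]])
                          for s, _, r, _, _ in transitions])
        k = min(self.n_clusters, len(transitions))
        labels = KMeans(n_clusters=k, n_init=10).fit_predict(feats)

        # Draw roughly the same number of transitions from every cluster.
        per_cluster = max(1, batch_size // k)
        chosen = []
        for c in range(k):
            idx = np.flatnonzero(labels == c)
            take = min(per_cluster, len(idx))
            chosen.extend(np.random.choice(idx, size=take, replace=False))
        batch = [transitions[i] for i in chosen[:batch_size]]
        states, actions, rewards, next_states, dones = map(np.array, zip(*batch))
        return states, actions, rewards, next_states, dones

Re-clustering the whole buffer on every call is kept here only for brevity; a practical variant would cluster incrementally or at fixed intervals.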