School of Information Science and Technology, Beijing Forestry University, Beijing 100107, China.
Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China.
Sensors (Basel). 2023 Dec 18;23(24):9904. doi: 10.3390/s23249904.
This paper studies a tactical decision-making model for short track speed skating based on deep reinforcement learning, with the aim of improving the competitive performance of short track speed skaters. Short track speed skating, a traditional Winter Olympics discipline since its introduction in 1988, has consistently garnered attention. As artificial intelligence continues to advance, the use of deep learning methods to enhance athletes' tactical decision-making capabilities has become increasingly prevalent. Traditional tactical decision techniques often rely on the experience and knowledge of coaches, together with video analysis methods that demand substantial time and effort. Consequently, this study proposes a scientific simulation environment for short track speed skating that accurately models the physical attributes of the venue, the physiological fitness of the athletes, and the rules of the competition. The Double Deep Q-Network (DDQN) model is enhanced and applied, with an improved reward function and distinct descriptions of four tactics. This enables agents to learn optimal tactical decisions across various competitive states within the simulation environment. Experimental results demonstrate that this approach effectively enhances the competitive performance and physiological fitness allocation of short track speed skaters.
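To make the DDQN component concrete, the sketch below shows the standard Double DQN target computation that distinguishes DDQN from vanilla DQN: the online network selects the next action, while the target network evaluates it, reducing Q-value overestimation. This is a minimal illustration of the general technique, not the paper's implementation; the function name, the toy Q-values, and the four-element action vector (loosely evoking the four tactics) are all assumptions for demonstration.

```python
import numpy as np

def ddqn_target(reward, next_q_online, next_q_target, gamma=0.99, done=False):
    """Double DQN bootstrap target.

    The online network picks the greedy next action (selection);
    the target network supplies its value (evaluation). Decoupling
    the two reduces the overestimation bias of single-network DQN.
    """
    if done:
        return reward
    best_action = int(np.argmax(next_q_online))          # selection: online net
    return reward + gamma * next_q_target[best_action]   # evaluation: target net

# Toy example: hypothetical Q-values over four discrete tactics.
q_online = np.array([1.0, 2.5, 0.3, 1.8])  # online net prefers action 1
q_target = np.array([0.9, 2.0, 0.5, 1.7])  # target net values action 1 at 2.0
y = ddqn_target(reward=1.0, next_q_online=q_online, next_q_target=q_target, gamma=0.9)
# y = 1.0 + 0.9 * 2.0 = 2.8
```

In training, `y` would serve as the regression target for the online network's Q-value of the taken action, with the target network's weights periodically copied from the online network.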