

Offline reinforcement learning for learning to dispatch for job shop scheduling.

Author Information

Jesse van Remmerden, Zaharah Bukhsh, Yingqian Zhang

Affiliation

Information Systems IE&IS, Eindhoven University of Technology, De Zaale, Eindhoven, 5600 MB, The Netherlands.

Publication Information

Mach Learn. 2025;114(8):191. doi: 10.1007/s10994-025-06826-w. Epub 2025 Jul 15.

Abstract

The Job Shop Scheduling Problem (JSSP) is a complex combinatorial optimization problem. While online Reinforcement Learning (RL) has shown promise by quickly finding acceptable solutions for JSSP, it faces key limitations: it requires extensive training interactions from scratch, leading to sample inefficiency; it cannot leverage existing high-quality solutions from traditional methods such as Constraint Programming (CP); and it requires simulated environments to train in, which are impractical to build for complex scheduling settings. We introduce Offline Learned Dispatching (Offline-LD), an offline reinforcement learning approach for JSSP that addresses these limitations by learning from historical scheduling data. Our approach is motivated by scenarios where historical scheduling data and expert solutions are available, or where online training of RL approaches in simulated environments is impracticable. Offline-LD introduces maskable variants of two Q-learning methods, Maskable Quantile Regression DQN (mQRDQN) and discrete maskable Soft Actor-Critic (d-mSAC), that learn from historical data through Conservative Q-Learning (CQL), and we present a novel entropy bonus modification for d-mSAC for maskable action spaces. Moreover, we introduce a novel reward normalization method for JSSP in an offline RL setting. Our experiments demonstrate that Offline-LD outperforms online RL on both generated and benchmark instances when trained on only 100 solutions generated by CP. Notably, introducing noise into the expert dataset yields results comparable or superior to using the clean expert dataset with the same number of instances, a promising finding for real-world applications, where data is inherently noisy and imperfect.
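
As a rough illustration only, and not the paper's implementation, the sketch below shows one common way to restrict a discrete stochastic policy and its entropy term to the currently valid (unmasked) dispatching actions, which is the kind of quantity an entropy bonus for a maskable action space would be built on. All names here are hypothetical.

```python
# Hypothetical sketch: a categorical policy whose probabilities and entropy are
# restricted to the actions that are schedulable in the current state.
import torch
import torch.nn.functional as F

def masked_policy_and_entropy(logits: torch.Tensor, action_mask: torch.Tensor):
    """logits:      (batch, num_actions) raw policy-network outputs
       action_mask: (batch, num_actions) bool, True where the action is valid."""
    # A large negative constant (rather than -inf) keeps the softmax numerically
    # safe while driving the probability of invalid actions to effectively zero.
    masked_logits = logits.masked_fill(~action_mask, -1e9)
    log_probs = F.log_softmax(masked_logits, dim=-1)
    probs = log_probs.exp()
    # Entropy over the valid actions only; masked entries contribute (numerically) zero.
    entropy = -(probs * log_probs).sum(dim=-1)
    return probs, log_probs, entropy
```

In a discrete soft actor-critic setup, an entropy term of this form would typically enter the actor objective as a bonus; the specific d-mSAC modification the paper proposes for maskable action spaces is not reproduced here.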

Graphical abstract: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/aba4/12263752/653171e0d9d8/10994_2025_6826_Figa_HTML.jpg
