University of Leuven (KU Leuven), Department of Pharmaceutical and Pharmacological Sciences, Pharmaceutical Analysis, Herestraat 49, 3000 Leuven, Belgium; Vrije Universiteit Brussel, Department of Chemical Engineering, Pleinlaan 2, 1050 Brussel, Belgium.
Vrije Universiteit Brussel, Department of Chemical Engineering, Pleinlaan 2, 1050 Brussel, Belgium.
J Chromatogr A. 2024 Jan 4;1713:464570. doi: 10.1016/j.chroma.2023.464570. Epub 2023 Dec 10.
Artificial intelligence and machine learning techniques are increasingly used for various tasks related to method development in liquid chromatography. In this study, the possibilities of a reinforcement learning algorithm, more specifically a deep deterministic policy gradient (DDPG) algorithm, are evaluated for the selection of scouting runs for retention time modeling. As a theoretical exercise, it is investigated whether such an algorithm can be trained to select scouting runs for any compound of interest, allowing its correct retention parameters for the three-parameter Neue-Kuss retention model to be retrieved. It is observed that three scouting runs are generally sufficient to retrieve the retention parameters with an accuracy (mean relative percentage error, MRPE) of 1 % or less. Allowing the agent to select additional scouting runs does not lead to a significantly improved accuracy. It is also observed that the agent tends to prefer isocratic scouting runs for retention time modeling, and is only motivated to select gradient scouting runs when (strongly) penalized for long analysis/gradient times. This reinforces the general power and usefulness of isocratic scouting runs for retention time modeling. Finally, the best results (lowest MRPE) are obtained when the agent manages to retrieve retention time data for % ACN at elution of the compound under consideration that span the relevant ACN range (5 % ACN to 95 % ACN) as well as possible, i.e., resulting in retention data at low, intermediate and high % ACN. Based on these results, we believe reinforcement learning holds great potential to automate and rationalize method development in liquid chromatography in the future.
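To make the quantities in the abstract concrete, the sketch below evaluates a Neue-Kuss-type three-parameter retention model (retention factor k as a function of the organic-modifier fraction φ) and computes the mean relative percentage error (MRPE) used to score recovered parameters. The functional form shown, the parameter names (k0, S1, S2), and the example parameter values are assumptions for illustration only, not values taken from the paper.

```python
import math

def neue_kuss_k(phi, k0, S1, S2):
    """Retention factor k at modifier fraction phi (0..1) under a
    Neue-Kuss-type three-parameter model (assumed functional form):
        k = k0 * (1 + S2*phi)^2 * exp(-S1*phi / (1 + S2*phi))
    At phi = 0 this reduces to k = k0."""
    return k0 * (1.0 + S2 * phi) ** 2 * math.exp(-S1 * phi / (1.0 + S2 * phi))

def mrpe(estimated, true):
    """Mean relative percentage error between estimated and true parameters."""
    return 100.0 * sum(abs(e - t) / abs(t) for e, t in zip(estimated, true)) / len(true)

# Hypothetical compound and hypothetical parameters recovered from
# three scouting runs (values are illustrative, not from the study).
true_params = (50.0, 8.0, 1.5)
est_params = (50.4, 7.95, 1.51)
print(f"MRPE = {mrpe(est_params, true_params):.2f} %")  # well below the 1 % threshold
```

An MRPE of 1 % or less, as reported in the abstract, thus corresponds to the recovered (k0, S1, S2) deviating from the true values by about 1 % on average.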