
Deep Q-learning for the selection of optimal isocratic scouting runs in liquid chromatography.

Affiliations

University of Leuven (KU Leuven), Department for Pharmaceutical and Pharmacological Sciences, Pharmaceutical Analysis, Herestraat 49, 3000 Leuven, Belgium.

Vrije Universiteit Brussel, Department of Computer Science, Artificial Intelligence Lab, Pleinlaan 9, 1050 Brussel, Belgium.

Publication Info

J Chromatogr A. 2021 Feb 8;1638:461900. doi: 10.1016/j.chroma.2021.461900. Epub 2021 Jan 13.

DOI: 10.1016/j.chroma.2021.461900
PMID: 33485027
Abstract

An important challenge in chromatography is the development of adequate separation methods. Accurate retention models can significantly simplify and expedite the development of adequate separation methods for complex mixtures. The purpose of this study was to introduce reinforcement learning to chromatographic method development, by training a double deep Q-learning algorithm to select optimal isocratic scouting runs to generate accurate retention models. These scouting runs were fitted to the Neue-Kuss retention model, which was then used to predict retention factors under both isocratic and gradient conditions. The quality of these predictions was compared to experimental data points by computing a mean relative percentage error (MRPE) between the predicted and actual retention factors. By providing the reinforcement learning algorithm with a reward whenever the scouting runs led to accurate retention models, and a penalty when the analysis time of a selected scouting run was too high (> 1 h), it was hypothesized that the algorithm would, over time, learn to select good scouting runs for compounds displaying a variety of characteristics. The reinforcement learning algorithm developed in this work was first trained on simulated data and then evaluated on experimental data for 57 small molecules, each run at 10 different fractions of organic modifier (0.05 to 0.90) and at four different linear gradients. The results showed that the MRPE of these retention models (3.77% for isocratic runs and 1.93% for gradient runs), mostly obtained via 3 isocratic scouting runs per compound, was comparable to that of retention models obtained by fitting the Neue-Kuss model to all 10 available isocratic data points (3.26% for isocratic runs and 4.97% for gradient runs) and to that of retention models obtained via a "chromatographer's selection" of three scouting runs (3.86% for isocratic runs and 6.66% for gradient runs). It was therefore concluded that the reinforcement learning algorithm learned to select optimal scouting runs for retention modeling: the 3 (out of 10) isocratic scouting runs it selected per compound were informative enough to successfully capture the retention behavior of each compound.
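The two quantitative ingredients of the abstract, the Neue-Kuss retention model and the MRPE metric, can be sketched in a few lines. The snippet below is not from the paper; the parameter and measurement values are invented for illustration, and it assumes the commonly cited Neue-Kuss parameterization k(phi) = k0 * (1 + S2*phi)^2 * exp(-S1*phi / (1 + S2*phi)), where phi is the organic-modifier fraction:

```python
import math

def neue_kuss_k(phi, k0, s1, s2):
    """Predicted retention factor k at organic-modifier fraction phi,
    using the Neue-Kuss model: k0 * (1 + s2*phi)^2 * exp(-s1*phi / (1 + s2*phi))."""
    return k0 * (1.0 + s2 * phi) ** 2 * math.exp(-s1 * phi / (1.0 + s2 * phi))

def mrpe(predicted, measured):
    """Mean relative percentage error between predicted and measured retention factors."""
    return 100.0 * sum(abs(p - m) / m for p, m in zip(predicted, measured)) / len(measured)

# Hypothetical example: three isocratic scouting runs for one compound.
k0, s1, s2 = 50.0, 20.0, 2.0            # made-up fitted parameters
phis = [0.05, 0.40, 0.90]               # scouting-run modifier fractions
predicted = [neue_kuss_k(phi, k0, s1, s2) for phi in phis]
measured = [24.0, 1.9, 0.65]            # made-up experimental values
print(f"MRPE = {mrpe(predicted, measured):.2f}%")
```

In the study itself, the role of the reinforcement learning agent is to choose which fractions phi to run (rewarded when the resulting fit is accurate, penalized when a run exceeds 1 h), after which the model is fitted to the selected runs and its MRPE is evaluated against held-out isocratic and gradient data.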


Similar Articles

1. A perspective on the use of deep deterministic policy gradient reinforcement learning for retention time modeling in reversed-phase liquid chromatography. J Chromatogr A. 2024 Jan 4;1713:464570. doi: 10.1016/j.chroma.2023.464570. Epub 2023 Dec 10.
2. Applicability of linear and nonlinear retention-time models for reversed-phase liquid chromatography separations of small molecules, peptides, and intact proteins. J Sep Sci. 2016 Apr;39(7):1249-57. doi: 10.1002/jssc.201501395.
3. On the inherent data fitting problems encountered in modeling retention behavior of analytes with dual retention mechanism. J Chromatogr A. 2015 Jul 17;1403:81-95. doi: 10.1016/j.chroma.2015.05.031. Epub 2015 May 22.
4. Possibilities of retention modeling and computer assisted method development in supercritical fluid chromatography. J Chromatogr A. 2015 Feb 13;1381:219-28. doi: 10.1016/j.chroma.2014.12.077. Epub 2015 Jan 7.
5. Generic approach to the method development of intact protein separations using hydrophobic interaction chromatography. J Sep Sci. 2018 Mar;41(5):1017-1024. doi: 10.1002/jssc.201701202. Epub 2017 Dec 27.
6. Retention modeling and method development in hydrophilic interaction chromatography. J Chromatogr A. 2014 Apr 11;1337:116-27. doi: 10.1016/j.chroma.2014.02.032. Epub 2014 Feb 19.
7. Experimental design and re-parameterization of the Neue-Kuss model for accurate and precise prediction of isocratic retention factors from gradient measurements in reversed phase liquid chromatography. J Chromatogr A. 2023 Nov 22;1711:464443. doi: 10.1016/j.chroma.2023.464443. Epub 2023 Oct 11.
8. Advancing HIC method development: Retention-time modeling and tuning selectivity with ternary mobile-phase systems. J Chromatogr A. 2024 Aug 16;1730:465133. doi: 10.1016/j.chroma.2024.465133. Epub 2024 Jun 30.
9. Accuracy of retention model parameters obtained from retention data in liquid chromatography. J Sep Sci. 2022 Sep;45(17):3241-3255. doi: 10.1002/jssc.202100911. Epub 2022 Apr 4.

Cited By

1. Lake eutrophication prediction based on improved MIMO-DD-3Q Learning. PLoS One. 2023 Nov 14;18(11):e0294278. doi: 10.1371/journal.pone.0294278. eCollection 2023.
2. Prediction of the performance of pre-packed purification columns through machine learning. J Sep Sci. 2022 Apr;45(8):1445-1457. doi: 10.1002/jssc.202100864. Epub 2022 Mar 20.