Optimization of Molecules via Deep Reinforcement Learning.

Affiliations

Department of Chemistry, Stanford University, Stanford, California, USA.

Work done during an internship at Google Research Applied Science, Mountain View, California, USA.

Publication Information

Sci Rep. 2019 Jul 24;9(1):10752. doi: 10.1038/s41598-019-47148-x.

Abstract

We present a framework, which we call Molecule Deep Q-Networks (MolDQN), for molecule optimization by combining domain knowledge of chemistry and state-of-the-art reinforcement learning techniques (double Q-learning and randomized value functions). We directly define modifications on molecules, thereby ensuring 100% chemical validity. Further, we operate without pre-training on any dataset to avoid possible bias from the choice of that set. MolDQN achieves comparable or better performance against several other recently published algorithms for benchmark molecular optimization tasks. However, we also argue that many of these tasks are not representative of real optimization problems in drug discovery. Inspired by problems faced during medicinal chemistry lead optimization, we extend our model with multi-objective reinforcement learning, which maximizes drug-likeness while maintaining similarity to the original molecule. We further show the path through chemical space to achieve optimization for a molecule to understand how the model works.
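The abstract highlights two design points that a short sketch can make concrete: an action space defined as direct, validity-preserving edits to a molecule, and a multi-objective reward that balances drug-likeness against similarity to the starting molecule. The RDKit-based snippet below is an illustrative sketch, not the MolDQN implementation: it enumerates only atom-addition edits (the paper's action set is broader), keeps just the edits that pass sanitization as one simple way to ensure chemical validity, and scores candidates with a weighted sum of QED and Tanimoto similarity. All function names and the weight w = 0.4 are hypothetical choices for illustration.

```python
# Illustrative sketch only -- not the authors' code base.
# (1) Validity-preserving edits: enumerate atom additions and keep only those
#     that survive RDKit sanitization.
# (2) Multi-objective reward: (1 - w) * QED + w * Tanimoto similarity to the
#     starting molecule, mirroring "maximize drug-likeness while maintaining
#     similarity" from the abstract.

from rdkit import Chem, DataStructs
from rdkit.Chem import AllChem, QED


def atom_addition_actions(smiles, allowed_atoms=("C", "N", "O")):
    """Enumerate molecules reachable by attaching one new atom via a single bond."""
    mol = Chem.MolFromSmiles(smiles)
    candidates = set()
    for atom in mol.GetAtoms():
        if atom.GetTotalNumHs() == 0:
            continue  # no free valence at this position
        for symbol in allowed_atoms:
            rw = Chem.RWMol(mol)
            new_idx = rw.AddAtom(Chem.Atom(symbol))
            rw.AddBond(atom.GetIdx(), new_idx, Chem.BondType.SINGLE)
            try:
                Chem.SanitizeMol(rw)  # reject chemically invalid edits
                candidates.add(Chem.MolToSmiles(rw))
            except Exception:
                pass
    return sorted(candidates)


def reward(smiles, start_smiles, w=0.4):
    """Weighted multi-objective reward: (1 - w) * QED + w * similarity."""
    mol = Chem.MolFromSmiles(smiles)
    start = Chem.MolFromSmiles(start_smiles)
    fp = AllChem.GetMorganFingerprintAsBitVect(mol, 2, nBits=2048)
    fp0 = AllChem.GetMorganFingerprintAsBitVect(start, 2, nBits=2048)
    sim = DataStructs.TanimotoSimilarity(fp, fp0)
    return (1.0 - w) * QED.qed(mol) + w * sim


if __name__ == "__main__":
    start = "c1ccccc1O"  # phenol, an arbitrary starting molecule
    for action in atom_addition_actions(start)[:5]:
        print(action, round(reward(action, start), 3))
```

In the full method described by the abstract, a deep Q-network trained with double Q-learning and randomized value functions would learn which of these edits to apply at each step, rather than scoring a one-step enumeration greedily as this sketch does.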

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/fc6e/6656766/252179756191/41598_2019_47148_Fig1_HTML.jpg
