

Molecular Design Method Using a Reversible Tree Representation of Chemical Compounds and Deep Reinforcement Learning.

Affiliations

Preferred Networks, Inc., 1-6-1 Otemachi, Chiyoda-ku, Tokyo 100-0004, Japan.

Publication Info

J Chem Inf Model. 2022 Sep 12;62(17):4032-4048. doi: 10.1021/acs.jcim.2c00366. Epub 2022 Aug 12.

DOI: 10.1021/acs.jcim.2c00366
PMID: 35960209
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9472278/
Abstract

Automatic design of molecules with specific chemical and biochemical properties is an important process in material informatics and computational drug discovery. In this study, we designed a novel coarse-grained tree representation of molecules (Reversible Junction Tree; "RJT") for the aforementioned purposes, which is reversely convertible to the original molecule without external information. By leveraging this representation, we further formulated the molecular design and optimization problem as a tree-structure construction using deep reinforcement learning ("RJT-RL"). In this method, all of the intermediate and final states of reinforcement learning are convertible to valid molecules, which could efficiently guide the optimization process in simple benchmark tasks. We further examined the multiobjective optimization and fine-tuning of the reinforcement learning models using RJT-RL, demonstrating the applicability of our method to more realistic tasks in drug discovery.

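The abstract's central claim is that every intermediate state of the tree-construction process decodes to a valid molecule, so rewards can guide the search at every step. The toy sketch below illustrates that idea only in outline (it is not the authors' code): the fragment vocabulary, the string-based "decoder", and the atom-count reward are all hypothetical stand-ins for the paper's junction-tree decoder and property objectives.

```python
# Hypothetical sketch of the RJT-RL idea: states are partial trees of
# fragments, every state decodes to a valid "molecule", and an episode
# grows the tree one fragment at a time.
import random

FRAGMENTS = ["C", "CC", "c1ccccc1", "O", "N"]  # toy fragment vocabulary

def decode(tree):
    """Map a partial tree to a (toy) molecule string -- the key RJT
    property: intermediate states are always decodable/valid."""
    return ".".join(tree)

def reward(molecule):
    """Toy objective: count atoms (a stand-in for a property score
    such as QED or a docking approximation)."""
    return sum(ch.isalpha() for ch in molecule)

def rollout(max_nodes=5, seed=0):
    """One episode: grow the tree, scoring is possible at every step."""
    rng = random.Random(seed)
    tree = []  # list of fragment nodes; a real RJT also stores attachments
    for _ in range(max_nodes):
        tree.append(rng.choice(FRAGMENTS))  # action: attach one fragment
    mol = decode(tree)
    return mol, reward(mol)

mol, r = rollout()
print(mol, r)
```

Because intermediate trees always decode, a learned policy can receive shaped rewards during construction rather than only at episode end, which is what the abstract credits for efficient optimization.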

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0171/9472278/094076b51597/ci2c00366_0002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0171/9472278/a25149bede2d/ci2c00366_0003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0171/9472278/e6825c7886dc/ci2c00366_0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0171/9472278/048a93b36524/ci2c00366_0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0171/9472278/ad6e9b8e5c35/ci2c00366_0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0171/9472278/2a9a6353a117/ci2c00366_0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0171/9472278/3eb35e180dae/ci2c00366_0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0171/9472278/d4d4007c7440/ci2c00366_0009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0171/9472278/90e65173871c/ci2c00366_0010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0171/9472278/03fb419e13a6/ci2c00366_0011.jpg

Similar Articles

1. Molecular Design Method Using a Reversible Tree Representation of Chemical Compounds and Deep Reinforcement Learning.
J Chem Inf Model. 2022 Sep 12;62(17):4032-4048. doi: 10.1021/acs.jcim.2c00366. Epub 2022 Aug 12.
2. Optimization of binding affinities in chemical space with generative pre-trained transformer and deep reinforcement learning.
F1000Res. 2024 Feb 20;12:757. doi: 10.12688/f1000research.130936.2. eCollection 2023.
3. Improving drug discovery with a hybrid deep generative model using reinforcement learning trained on a Bayesian docking approximation.
J Comput Aided Mol Des. 2023 Nov;37(11):507-517. doi: 10.1007/s10822-023-00523-3. Epub 2023 Aug 8.
4. Deep Reinforcement Learning and Its Neuroscientific Implications.
Neuron. 2020 Aug 19;107(4):603-616. doi: 10.1016/j.neuron.2020.06.014. Epub 2020 Jul 13.
5. De Novo Drug Design Using Transformer-Based Machine Translation and Reinforcement Learning of an Adaptive Monte Carlo Tree Search.
Pharmaceuticals (Basel). 2024 Jan 27;17(2):161. doi: 10.3390/ph17020161.
6. Evaluation of reinforcement learning in transformer-based molecular design.
J Cheminform. 2024 Aug 8;16(1):95. doi: 10.1186/s13321-024-00887-0.
7. Deep reinforcement learning for de novo drug design.
Sci Adv. 2018 Jul 25;4(7):eaap7885. doi: 10.1126/sciadv.aap7885. eCollection 2018 Jul.
8. DRlinker: Deep Reinforcement Learning for Optimization in Fragment Linking Design.
J Chem Inf Model. 2022 Dec 12;62(23):5907-5917. doi: 10.1021/acs.jcim.2c00982. Epub 2022 Nov 20.
9. De novo drug design as GPT language modeling: large chemistry models with supervised and reinforcement learning.
J Comput Aided Mol Des. 2024 Apr 22;38(1):20. doi: 10.1007/s10822-024-00559-z.
10. GRELinker: A Graph-Based Generative Model for Molecular Linker Design with Reinforcement and Curriculum Learning.
J Chem Inf Model. 2024 Feb 12;64(3):666-676. doi: 10.1021/acs.jcim.3c01700. Epub 2024 Jan 19.

Cited By

1. Computer-aided multi-objective optimization in small molecule discovery.
Patterns (N Y). 2023 Feb 10;4(2):100678. doi: 10.1016/j.patter.2023.100678.

References

1. De novo molecular design and generative models.
Drug Discov Today. 2021 Nov;26(11):2707-2715. doi: 10.1016/j.drudis.2021.05.019. Epub 2021 Jun 1.
2. CReM: chemically reasonable mutations framework for structure generation.
J Cheminform. 2020 Apr 22;12(1):28. doi: 10.1186/s13321-020-00431-w.
3. Molecular Sets (MOSES): A Benchmarking Platform for Molecular Generation Models.
Front Pharmacol. 2020 Dec 18;11:565644. doi: 10.3389/fphar.2020.565644. eCollection 2020.
4. Optimization of Molecules via Deep Reinforcement Learning.
Sci Rep. 2019 Jul 24;9(1):10752. doi: 10.1038/s41598-019-47148-x.
5. GuacaMol: Benchmarking Models for de Novo Molecular Design.
J Chem Inf Model. 2019 Mar 25;59(3):1096-1108. doi: 10.1021/acs.jcim.8b00839. Epub 2019 Mar 19.
6. Automatic Chemical Design Using a Data-Driven Continuous Representation of Molecules.
ACS Cent Sci. 2018 Feb 28;4(2):268-276. doi: 10.1021/acscentsci.7b00572. Epub 2018 Jan 12.
7. Molecular de-novo design through deep reinforcement learning.
J Cheminform. 2017 Sep 4;9(1):48. doi: 10.1186/s13321-017-0235-x.
8. Better Informed Distance Geometry: Using What We Know To Improve Conformation Generation.
J Chem Inf Model. 2015 Dec 28;55(12):2562-74. doi: 10.1021/acs.jcim.5b00654. Epub 2015 Nov 30.
9. OpenGrowth: An Automated and Rational Algorithm for Finding New Protein Ligands.
J Med Chem. 2016 May 12;59(9):4171-88. doi: 10.1021/acs.jmedchem.5b00886. Epub 2015 Sep 23.
10. Deep learning.
Nature. 2015 May 28;521(7553):436-44. doi: 10.1038/nature14539.