
RM-GPT: Enhance the comprehensive generative ability of molecular GPT model via LocalRNN and RealFormer.

Affiliations

School of Computer Science and Technology, Soochow University, Suzhou, 215006, China.

Publication Information

Artif Intell Med. 2024 Apr;150:102827. doi: 10.1016/j.artmed.2024.102827. Epub 2024 Feb 27.

Abstract

Due to surging costs, artificial intelligence-assisted de novo drug design has supplanted conventional methods and become an emerging option for drug discovery. Although generative models have been applied to the molecular field with many successful examples, these methods struggle with conditional generation that meets chemists' practical requirements: a controllable process for generating new molecules, or optimizing base molecules, under specified conditions. To address this problem, a Recurrent Molecular-Generative Pretrained Transformer model, referred to as RM-GPT, is proposed, supplemented by LocalRNN and the Residual Attention Layer Transformer (RealFormer). RM-GPT rebuilds the GPT architecture by incorporating LocalRNN and RealFormer so that it can extract local information and build connectivity between attention blocks. Incorporating the Transformer into these two modules preserves the parallel-computing advantages of multi-head attention while extracting local structural information effectively. By exploring and learning in a large chemical space, RM-GPT learns to generate drug-like molecules precisely and stably under the conditions in demand, such as desired properties and scaffolds. RM-GPT achieved better results than state-of-the-art (SOTA) methods on conditional generation.
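The abstract gives only a high-level description of the architecture. The sketch below is a minimal, hedged illustration in PyTorch of how a LocalRNN module and RealFormer-style residual attention might be combined in a GPT-like decoder block; it is not the authors' implementation, and all class, parameter, and variable names (LocalRNN, ResidualAttention, RMGPTBlock, window, d_model) are assumptions made for illustration.

```python
# Minimal sketch (assumptions, not the authors' code): one possible way to
# combine a LocalRNN module with RealFormer-style residual attention in a
# GPT-like decoder block.
import torch
import torch.nn as nn
import torch.nn.functional as F


class LocalRNN(nn.Module):
    """Summarise each position's local causal window with a small GRU."""

    def __init__(self, d_model: int, window: int = 5):
        super().__init__()
        self.window = window
        self.rnn = nn.GRU(d_model, d_model, batch_first=True)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, d = x.shape                                  # (batch, seq, dim)
        padded = F.pad(x, (0, 0, self.window - 1, 0))      # left-pad in time
        win = padded.unfold(1, self.window, 1)             # (b, t, d, window)
        win = win.permute(0, 1, 3, 2).reshape(b * t, self.window, d)
        _, h = self.rnn(win)                               # h: (1, b*t, d)
        return h.squeeze(0).view(b, t, d)


class ResidualAttention(nn.Module):
    """RealFormer-style attention: the previous layer's raw scores are added
    to this layer's pre-softmax scores, linking attention blocks."""

    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        self.n_heads, self.d_head = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, x, prev_scores=None):
        b, t, d = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q, k, v = (z.view(b, t, self.n_heads, self.d_head).transpose(1, 2)
                   for z in (q, k, v))
        scores = q @ k.transpose(-2, -1) / self.d_head ** 0.5
        if prev_scores is not None:                        # residual link
            scores = scores + prev_scores
        causal = torch.triu(torch.ones(t, t, dtype=torch.bool,
                                       device=x.device), diagonal=1)
        masked = scores.masked_fill(causal, float("-inf"))
        out = (masked.softmax(-1) @ v).transpose(1, 2).reshape(b, t, d)
        return self.proj(out), scores


class RMGPTBlock(nn.Module):
    """Decoder block: LocalRNN, residual attention, then a feed-forward net;
    the raw attention scores are returned so the next block can reuse them."""

    def __init__(self, d_model: int = 256, n_heads: int = 8, window: int = 5):
        super().__init__()
        self.local = LocalRNN(d_model, window)
        self.attn = ResidualAttention(d_model, n_heads)
        self.ln1, self.ln2, self.ln3 = (nn.LayerNorm(d_model) for _ in range(3))
        self.ff = nn.Sequential(nn.Linear(d_model, 4 * d_model), nn.GELU(),
                                nn.Linear(4 * d_model, d_model))

    def forward(self, x, prev_scores=None):
        x = x + self.local(self.ln1(x))                    # local structure
        a, scores = self.attn(self.ln2(x), prev_scores)    # global attention
        x = x + a
        return x + self.ff(self.ln3(x)), scores
```

Under these assumptions, stacking several RMGPTBlock instances and threading each block's returned scores into the next one provides the RealFormer-style connectivity between attention blocks that the abstract describes, while the GRU over a short causal window supplies the local structural information.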
