YerevaNN, Charents str. 20, 0025 Yerevan, Armenia.
Toxometris.ai, Sarmen str. 7, 0019 Yerevan, Armenia.
J Chem Inf Model. 2024 Aug 12;64(15):5832-5843. doi: 10.1021/acs.jcim.4c00512. Epub 2024 Jul 25.
We discover a robust self-supervised strategy tailored to molecular representations for generative masked language models through a series of in-depth ablations. Using this pretraining strategy, we train BARTSmiles, a BART-like model trained with an order of magnitude more compute than previous self-supervised molecular representations. In-depth evaluations show that BARTSmiles consistently outperforms other self-supervised representations across classification, regression, and generation tasks, setting a new state of the art on eight tasks. We then show that, when applied to the molecular domain, the BART objective learns representations that implicitly encode our downstream tasks of interest. For example, by selecting seven neurons from a frozen BARTSmiles, we obtain a model whose performance is within two percentage points of the fully fine-tuned model on the ClinTox task. Lastly, we show that standard attribution interpretability methods, when applied to BARTSmiles, highlight substructures that chemists use to explain specific properties of molecules. The code and pretrained model are publicly available.
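The seven-neuron result suggests that a simple read-out on a frozen encoder can approach full fine-tuning. Below is a minimal, hypothetical sketch of such a probe, assuming pooled BARTSmiles encoder states and ClinTox labels are already computed; the univariate F-test selection and logistic-regression head are illustrative stand-ins, not the authors' exact procedure.

```python
# Hypothetical sketch: probing a frozen BARTSmiles representation with a
# handful of neurons, in the spirit of the seven-neuron ClinTox result.
# The selection method here (univariate F-test) is an assumption, not the
# paper's procedure.
import numpy as np
from sklearn.feature_selection import SelectKBest, f_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# X: (n_molecules, hidden_dim) pooled encoder states from a frozen BARTSmiles;
# y: binary ClinTox labels. Random placeholders stand in for real data here.
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 1024))          # placeholder for real embeddings
y = rng.integers(0, 2, size=500)          # placeholder for real labels

probe = make_pipeline(
    SelectKBest(f_classif, k=7),          # keep the 7 most label-correlated neurons
    LogisticRegression(max_iter=1000),    # linear read-out on those 7 neurons only
)
scores = cross_val_score(probe, X, y, cv=5, scoring="roc_auc")
print(f"ROC AUC with 7 frozen neurons: {scores.mean():.3f}")
```

With real frozen embeddings in place of the placeholders, the same pipeline quantifies how much task signal a few fixed neurons carry relative to full fine-tuning.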