Tran Tuan, Ekenna Chinwe
Department of Computer Science, University at Albany, Albany, NY 12203, USA.
Int J Mol Sci. 2023 Jul 26;24(15):11948. doi: 10.3390/ijms241511948.
In this study, we introduce semi-supervised machine learning models designed to predict molecular properties. Our model employs a two-stage approach, involving pre-training and fine-tuning. In particular, our model leverages a substantial amount of labeled and unlabeled data consisting of SMILES strings, a text-based representation system for molecules. During the pre-training stage, our model capitalizes on the Masked Language Model, widely used in natural language processing, to learn molecular chemical space representations. During the fine-tuning stage, our model is trained on a smaller labeled dataset to tackle specific downstream tasks, such as classification or regression. Preliminary results indicate that our model achieves performance comparable to state-of-the-art models on the chosen downstream tasks from MoleculeNet. Additionally, to reduce the computational overhead, we propose a new approach that takes advantage of 3D compound structures to calculate the attention scores used in an end-to-end transformer model for predicting anti-malarial drug candidates. The results show that, using the proposed attention score, our end-to-end model achieves performance comparable to that of pre-trained models.
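The abstract does not spell out the exact formulation of the 3D-structure-based attention score, so the following is only a minimal sketch of one plausible reading: pairwise atomic distances from a 3D conformer are converted into an additive bias on the scaled dot-product attention scores, so that spatially close atoms attend to each other more strongly. The class name, the exponential distance kernel, and the gamma hyperparameter are illustrative assumptions, not the authors' implementation.

```python
# Minimal sketch (not the authors' released code) of folding a 3D-distance-derived
# bias into transformer self-attention. Shapes and the distance kernel are assumptions.
import torch
import torch.nn as nn
import torch.nn.functional as F

class DistanceBiasedSelfAttention(nn.Module):
    def __init__(self, embed_dim: int, num_heads: int, gamma: float = 1.0):
        super().__init__()
        assert embed_dim % num_heads == 0
        self.num_heads = num_heads
        self.head_dim = embed_dim // num_heads
        self.qkv = nn.Linear(embed_dim, 3 * embed_dim)
        self.out = nn.Linear(embed_dim, embed_dim)
        self.gamma = gamma  # assumed hyperparameter: decay rate of attention with 3D distance

    def forward(self, x: torch.Tensor, coords: torch.Tensor) -> torch.Tensor:
        # x:      (batch, n_atoms, embed_dim) atom/token embeddings
        # coords: (batch, n_atoms, 3) 3D conformer coordinates
        b, n, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        k = k.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)
        v = v.view(b, n, self.num_heads, self.head_dim).transpose(1, 2)

        # Pairwise Euclidean distances between atoms: (batch, n_atoms, n_atoms)
        dist = torch.cdist(coords, coords)
        # Additive bias: spatially nearby atoms receive a larger attention score.
        bias = (-self.gamma * dist).unsqueeze(1)  # broadcast over heads

        scores = q @ k.transpose(-2, -1) / self.head_dim ** 0.5 + bias
        attn = F.softmax(scores, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(b, n, -1)
        return self.out(out)

# Toy usage with random embeddings and coordinates.
if __name__ == "__main__":
    layer = DistanceBiasedSelfAttention(embed_dim=64, num_heads=4)
    x = torch.randn(2, 10, 64)       # 2 molecules, 10 atoms each
    coords = torch.randn(2, 10, 3)   # placeholder 3D positions
    print(layer(x, coords).shape)    # torch.Size([2, 10, 64])
```

Because the bias is computed directly from the conformer geometry rather than learned during pre-training, a layer of this kind can be trained end-to-end on the target task, which is consistent with the abstract's stated goal of reducing computational overhead relative to large-scale pre-training.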