Hu Kaimiao, Wei Jianguo, Sun Changming, Geng Jie, Wei Leyi, Dai Qi, Su Ran
College of Intelligence and Computing, Tianjin University, Tianjin, 300072, China.
CSIRO Data61, Sydney, 2000, Australia.
Bioinformatics. 2025 Jun 2;41(6). doi: 10.1093/bioinformatics/btaf223.
Effective computational methods for predicting the mechanism of action (MoA) of compounds are essential in drug discovery. Current MoA prediction models mainly utilize the structural information of compounds. However, high-throughput screening technologies have generated more targeted cell perturbation data for MoA prediction, a factor frequently disregarded by the majority of current approaches. Moreover, exploring the commonalities and specificities among different fingerprint representations remains challenging.
In this paper, we propose IFMoAP, a model integrating cell perturbation images and fingerprint data for MoA prediction. First, we modify ResNet to accommodate feature extraction from five-channel cell perturbation images and establish a granularity-level attention mechanism to combine coarse- and fine-grained features. To learn both common and specific fingerprint features, we introduce an FP-CS module, which projects four fingerprint embeddings into distinct spaces and incorporates two loss functions for effective learning. Finally, we construct two independent classifiers based on the image and fingerprint features and combine their prediction scores by weighting. Experimental results demonstrate that our model achieves the highest accuracy of 0.941 when using multimodal data. Comparisons with other methods and further explorations highlight the superiority of our proposed model and the complementary characteristics of multimodal data.
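The final weighted fusion of the two branch classifiers can be illustrated with a minimal sketch. This is not the authors' implementation; the function name `fuse_scores` and the weight `alpha` are hypothetical, and a fixed scalar weight is assumed purely for illustration:

```python
import numpy as np

def fuse_scores(img_scores, fp_scores, alpha=0.5):
    """Weighted late fusion of the image-branch and fingerprint-branch
    class probability vectors (alpha is a hypothetical fixed weight)."""
    return alpha * img_scores + (1 - alpha) * fp_scores

# Toy 3-class probability vectors from the two independent classifiers
img = np.array([0.7, 0.2, 0.1])
fp = np.array([0.5, 0.4, 0.1])

fused = fuse_scores(img, fp, alpha=0.5)   # [0.6, 0.3, 0.1]
pred = int(np.argmax(fused))              # predicted MoA class index
```

In practice the weight balancing the two modalities could itself be learned or tuned on validation data, which is consistent with the paper's observation that the image and fingerprint modalities carry complementary information.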
The source code is available at https://github.com/s1mplehu/IFMoAP. The raw image data of Cell Painting can be accessed from Figshare (https://doi.org/10.17044/scilifelab.21378906).