Synergizing multimodal data and fingerprint space exploration for mechanism of action prediction.

Author Information

Hu Kaimiao, Wei Jianguo, Sun Changming, Geng Jie, Wei Leyi, Dai Qi, Su Ran

Affiliations

College of Intelligence and Computing, Tianjin University, Tianjin, 300072, China.

CSIRO Data61, Sydney, 2000, Australia.

Publication Information

Bioinformatics. 2025 Jun 2;41(6). doi: 10.1093/bioinformatics/btaf223.

Abstract

MOTIVATION

Effective computational methods for predicting the mechanism of action (MoA) of compounds are essential in drug discovery. Current MoA prediction models mainly utilize the structural information of compounds. However, high-throughput screening technologies have generated a growing volume of targeted cell perturbation data relevant to MoA prediction, a resource that most current approaches disregard. Moreover, exploring the commonalities and specificities among different fingerprint representations remains challenging.

RESULTS

In this paper, we propose IFMoAP, a model that integrates cell perturbation images and fingerprint data for MoA prediction. First, we modify ResNet to extract features from five-channel cell perturbation images and introduce a granularity-level attention mechanism to combine coarse- and fine-grained features. To learn both common and specific fingerprint features, we introduce an FP-CS module that projects four fingerprint embeddings into distinct spaces and incorporates two loss functions for effective learning. Finally, we construct two independent classifiers based on the image and fingerprint features, respectively, and weight their two prediction scores to obtain the final prediction. Experimental results demonstrate that our model achieves the highest accuracy of 0.941 when using multimodal data. Comparisons with other methods and further explorations highlight the superiority of the proposed model and the complementary nature of the multimodal data.
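To make the pipeline above more concrete, here is a minimal PyTorch sketch of the four steps the abstract describes: a ResNet backbone adapted to five-channel Cell Painting images, a granularity-level gate over coarse- and fine-grained features, an FP-CS-style projection of four fingerprint embeddings into common and specific spaces, and a weighted fusion of two classifier scores. All class names, layer choices, fingerprint dimensions, and the fusion weight are assumptions made for illustration; the authors' actual implementation is in the linked repository.

```python
# Hypothetical sketch of the IFMoAP pipeline described in the abstract.
# Module names, layer choices, fingerprint dimensions, and the fusion
# weight are illustrative assumptions, not the authors' released code.
import torch
import torch.nn as nn
from torchvision.models import resnet18


class FiveChannelResNet(nn.Module):
    """ResNet-18 backbone adapted to five-channel Cell Painting images,
    returning a fine-grained (shallow) and a coarse-grained (deep) vector."""
    def __init__(self, feat_dim=256):
        super().__init__()
        base = resnet18(weights=None)
        # Replace the 3-channel stem convolution with a 5-channel one.
        base.conv1 = nn.Conv2d(5, 64, kernel_size=7, stride=2,
                               padding=3, bias=False)
        self.stem = nn.Sequential(base.conv1, base.bn1, base.relu, base.maxpool)
        self.layer1, self.layer2 = base.layer1, base.layer2
        self.layer3, self.layer4 = base.layer3, base.layer4
        self.pool = nn.AdaptiveAvgPool2d(1)
        self.fine_proj = nn.Linear(128, feat_dim)    # shallow (fine-grained) features
        self.coarse_proj = nn.Linear(512, feat_dim)  # deep (coarse-grained) features

    def forward(self, x):
        x = self.layer1(self.stem(x))
        fine = self.layer2(x)
        coarse = self.layer4(self.layer3(fine))
        fine_vec = self.fine_proj(self.pool(fine).flatten(1))
        coarse_vec = self.coarse_proj(self.pool(coarse).flatten(1))
        return fine_vec, coarse_vec


class GranularityAttention(nn.Module):
    """Attention-style gate that mixes coarse- and fine-grained features."""
    def __init__(self, feat_dim=256):
        super().__init__()
        self.gate = nn.Sequential(nn.Linear(2 * feat_dim, feat_dim), nn.ReLU(),
                                  nn.Linear(feat_dim, 2))

    def forward(self, fine_vec, coarse_vec):
        w = torch.softmax(self.gate(torch.cat([fine_vec, coarse_vec], dim=1)), dim=1)
        return w[:, :1] * fine_vec + w[:, 1:] * coarse_vec


class FPCS(nn.Module):
    """Projects each of four fingerprint embeddings into a shared ('common')
    space and a fingerprint-specific space; the fingerprint dimensions below
    are placeholders for whatever fingerprints are actually used."""
    def __init__(self, fp_dims=(1024, 1024, 167, 2048), feat_dim=256):
        super().__init__()
        self.common = nn.ModuleList(nn.Linear(d, feat_dim) for d in fp_dims)
        self.specific = nn.ModuleList(nn.Linear(d, feat_dim) for d in fp_dims)

    def forward(self, fps):  # fps: list of four fingerprint tensors
        common = [proj(fp) for proj, fp in zip(self.common, fps)]
        specific = [proj(fp) for proj, fp in zip(self.specific, fps)]
        fused = torch.stack(common + specific, dim=0).mean(0)
        return fused, common, specific


class IFMoAPSketch(nn.Module):
    def __init__(self, n_classes=10, feat_dim=256, alpha=0.5):
        super().__init__()
        self.image_net = FiveChannelResNet(feat_dim)
        self.attn = GranularityAttention(feat_dim)
        self.fp_cs = FPCS(feat_dim=feat_dim)
        self.img_clf = nn.Linear(feat_dim, n_classes)
        self.fp_clf = nn.Linear(feat_dim, n_classes)
        self.alpha = alpha  # weight between the two prediction scores

    def forward(self, images, fps):
        fine_vec, coarse_vec = self.image_net(images)
        img_feat = self.attn(fine_vec, coarse_vec)
        fp_feat, common, specific = self.fp_cs(fps)
        img_score = self.img_clf(img_feat).softmax(dim=1)
        fp_score = self.fp_clf(fp_feat).softmax(dim=1)
        return self.alpha * img_score + (1 - self.alpha) * fp_score
```

During training, the two FP-CS loss functions mentioned in the abstract would plausibly be a similarity term pulling the common projections together and a separation term keeping the specific projections apart, added to the classification losses; their exact formulations are given in the paper, not in this sketch.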

AVAILABILITY AND IMPLEMENTATION

The source code is available at https://github.com/s1mplehu/IFMoAP. The raw Cell Painting image data can be accessed from Figshare (https://doi.org/10.17044/scilifelab.21378906).

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f88b/12145173/ad30a16b246f/btaf223f1.jpg
