School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, PR China.
School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, 611731, PR China; Big Data Research Center, University of Electronic Science and Technology of China, Chengdu, 611731, PR China.
Comput Methods Programs Biomed. 2024 Dec;257:108400. doi: 10.1016/j.cmpb.2024.108400. Epub 2024 Sep 6.
Accurate prognosis prediction for cancer patients plays a significant role in the formulation of treatment strategies and considerably impacts personalized medicine. Recent advancements in this field indicate that integrating information from multiple modalities, such as genetic and clinical data, into multi-modal deep learning models can enhance prediction accuracy. However, most existing multi-modal deep learning methods either overlook patient similarities that benefit prognosis prediction or fail to effectively capture diverse information because they measure patient similarity from a single perspective. To address these issues, we propose a novel framework, multi-modal multi-view graph convolutional networks (MMGCN), for cancer prognosis prediction.
Initially, we utilize the similarity network fusion (SNF) algorithm to merge patient similarity networks (PSNs), individually constructed using gene expression, copy number alteration, and clinical data, into a fused PSN for integrating multi-modal information. To capture diverse perspectives of patient similarities, we treat the fused PSN as a multi-view graph by considering each single-edge-type subgraph as a view graph, and propose multi-view graph convolutional networks (GCNs) with a view-level attention mechanism. Moreover, an edge homophily prediction module is designed to alleviate the adverse effects of heterophilic edges on the representation power of GCNs. Finally, comprehensive representations of patient nodes are obtained to predict cancer prognosis.
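The multi-view aggregation step described above can be sketched as follows. This is a minimal, illustrative numpy implementation, not the authors' code: it assumes one GCN layer per view graph and a simple dot-product scoring vector for the view-level attention; all function and variable names (`multi_view_gcn`, `q`, `Ws`) are hypothetical.

```python
import numpy as np

def normalize_adj(A):
    # Symmetric GCN normalization: D^{-1/2} (A + I) D^{-1/2}
    A_hat = A + np.eye(A.shape[0])
    d_inv_sqrt = 1.0 / np.sqrt(A_hat.sum(axis=1))
    return A_hat * d_inv_sqrt[:, None] * d_inv_sqrt[None, :]

def gcn_layer(A_norm, X, W):
    # One graph convolution: propagate over the view graph, transform, ReLU
    return np.maximum(A_norm @ X @ W, 0.0)

def multi_view_gcn(views, X, Ws, q):
    # views: adjacency matrices, one per single-edge-type view graph
    # q: scoring vector for view-level attention (hypothetical parametrization)
    H = [gcn_layer(normalize_adj(A), X, W) for A, W in zip(views, Ws)]
    scores = np.array([np.mean(h @ q) for h in H])
    alpha = np.exp(scores - scores.max())
    alpha /= alpha.sum()  # softmax over views: attention weights
    # Weighted sum of view-specific embeddings -> comprehensive representation
    return sum(a * h for a, h in zip(alpha, H))

# Toy example: 4 patients, 3 input features, 2 view graphs
rng = np.random.default_rng(0)
A1 = np.array([[0, 1, 0, 0], [1, 0, 1, 0], [0, 1, 0, 1], [0, 0, 1, 0]], float)
A2 = np.array([[0, 0, 1, 0], [0, 0, 0, 1], [1, 0, 0, 0], [0, 1, 0, 0]], float)
X = rng.standard_normal((4, 3))
Ws = [rng.standard_normal((3, 2)) for _ in range(2)]
q = rng.standard_normal(2)
Z = multi_view_gcn([A1, A2], X, Ws, q)
print(Z.shape)  # per-patient embeddings, one row per patient
```

In the full framework, the fused PSN produced by SNF is first split into its single-edge-type subgraphs before this aggregation, and an edge homophily prediction module down-weights heterophilic edges; both steps are omitted here for brevity.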
Experimental results demonstrate that MMGCN outperforms state-of-the-art baselines on four public datasets (METABRIC, TCGA-BRCA, TCGA-LGG, and TCGA-LUSC), achieving areas under the receiver operating characteristic curve of 0.827 ± 0.005, 0.805 ± 0.014, 0.925 ± 0.007, and 0.746 ± 0.013, respectively.
Our study reveals the effectiveness of the proposed MMGCN, which deeply explores patient similarities related to different modalities from a broad perspective, in enhancing the performance of multi-modal cancer prognosis prediction. The source code is publicly available at https://github.com/ping-y/MMGCN.