IEEE Trans Med Imaging. 2022 Sep;41(9):2207-2216. doi: 10.1109/TMI.2022.3159264. Epub 2022 Aug 31.
Benefiting from the powerful expressive capability of graphs, graph-based approaches have been widely applied to multi-modal medical data and have achieved impressive performance in various biomedical applications. For disease prediction tasks, most existing graph-based methods define the graph manually based on a specified modality (e.g., demographic information) and then integrate the other modalities to obtain the patient representation via Graph Representation Learning (GRL). However, constructing an appropriate graph in advance is not simple, and the complex correlations between modalities are ignored. As a result, these methods may fail to provide sufficient information about a patient's condition for a reliable diagnosis. To this end, we propose an end-to-end Multi-modal Graph Learning framework (MMGL) for multi-modal disease prediction. To effectively exploit the rich disease-related information across modalities, we propose modality-aware representation learning, which aggregates the features of each modality by leveraging the correlation and complementarity between modalities. Furthermore, instead of defining the graph manually, we capture the latent graph structure through adaptive graph learning, which can be jointly optimized with the prediction model and thus reveals the intrinsic connections among samples. Our model also supports inductive learning on unseen data. Extensive experiments on two disease prediction tasks demonstrate that the proposed MMGL achieves more favorable performance than existing methods. The code of MMGL is available at https://github.com/SsGood/MMGL.
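The adaptive graph learning idea mentioned above can be illustrated with a minimal sketch. This is not the paper's exact formulation (see the MMGL repository for that); it shows one common scheme in which a soft adjacency matrix is computed from a learnable projection of patient features, so that the graph structure itself becomes a differentiable function of parameters (`W` here is hypothetical) that can be optimized jointly with the downstream predictor:

```python
import numpy as np

def softmax(z, axis=-1):
    # numerically stable softmax for row-normalising similarity scores
    z = z - z.max(axis=axis, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def adaptive_graph(X, W):
    """Build a soft adjacency matrix from a learnable projection W.

    Each entry A[i, j] is the row-normalised similarity between the
    projected features of patients i and j. Because A is a smooth
    function of W, gradients from the prediction loss can flow into W,
    i.e. the graph is learned rather than defined manually.
    """
    H = X @ W            # project fused patient features
    S = H @ H.T          # pairwise similarity scores
    return softmax(S, axis=1)  # row-stochastic adjacency

rng = np.random.default_rng(0)
X = rng.normal(size=(5, 8))        # 5 patients, 8 fused features (toy data)
W = rng.normal(size=(8, 4)) * 0.1  # learnable parameters (hypothetical)
A = adaptive_graph(X, W)           # 5 x 5 learned soft adjacency
```

Because the adjacency is produced by a parametric function of the input features rather than a fixed precomputed graph, the same function can also be applied to previously unseen patients, which is what makes inductive inference possible.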