Xie Weidong, Fang Yushan, Yang Guicheng, Yu Kun, Li Wei
School of Computer Science and Engineering, Northeastern University, Hunnan District, Shenyang 110169, China.
College of Medicine and Bioinformation Engineering, Northeastern University, Hunnan District, Shenyang 110169, China.
Biomolecules. 2023 Sep 15;13(9):1391. doi: 10.3390/biom13091391.
As the number of modalities in biomedical data continues to increase, multi-modal data become increasingly important for capturing the complex relationships among biological processes and thereby complementing disease classification. However, existing multi-modal fusion methods for biomedical data do not fully exploit intra- and inter-modal interactions, and powerful fusion methods are rarely applied to biomedical data. In this paper, we propose a novel multi-modal data fusion method that addresses these limitations. Our method uses a graph neural network and a 3D convolutional network to model intra-modal relationships, extracting meaningful features from each modality while preserving crucial information. To fuse information across modalities, we employ the Low-rank Multi-modal Fusion method, which integrates multiple modalities effectively while reducing noise and redundancy. In addition, our method incorporates a Cross-modal Transformer that automatically learns relationships between different modalities, enhancing information exchange and representation. We validate the proposed method on lung CT imaging data together with physiological and biochemical data from patients diagnosed with Chronic Obstructive Pulmonary Disease (COPD). Our method achieves higher disease classification accuracy than various fusion methods and their variants.
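To make the fusion step concrete, the following is a minimal NumPy sketch of the general Low-rank Multi-modal Fusion idea for two modalities (e.g. an imaging feature vector and a clinical feature vector). All dimensions, variable names, and the random features are illustrative assumptions, not the paper's actual implementation: each modality (with a bias term appended) is projected by rank-decomposed factors, the projections are combined by an elementwise product, and summing over the rank dimension yields the fused representation without ever materializing the full outer-product tensor.

```python
import numpy as np

rng = np.random.default_rng(0)
d_img, d_clin, d_out, rank = 8, 5, 4, 3  # illustrative sizes

# Modality features with a bias 1 appended (the standard LMF construction)
z_img = np.concatenate([rng.standard_normal(d_img), [1.0]])
z_clin = np.concatenate([rng.standard_normal(d_clin), [1.0]])

# Rank-decomposed, modality-specific factors (would be learned in practice)
W_img = rng.standard_normal((rank, d_img + 1, d_out))
W_clin = rng.standard_normal((rank, d_clin + 1, d_out))

# Project each modality with each rank-1 factor, fuse by elementwise
# product, then sum over the rank dimension. This equals tensor fusion
# with a low-rank weight tensor, at a fraction of the cost.
proj_img = np.einsum('i,rio->ro', z_img, W_img)
proj_clin = np.einsum('i,rio->ro', z_clin, W_clin)
h = (proj_img * proj_clin).sum(axis=0)  # fused feature of shape (d_out,)
```

The low-rank form is exactly equivalent to fusing the two modalities through the full (d_img+1) x (d_clin+1) x d_out weight tensor whose rank-r decomposition the factors above define, which is what keeps the parameter count linear in the number of modalities.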