Li Jiaqi, Liao Lejian, Jia Meihuizi, Chen Zhendong, Liu Xin
School of Computer Science and Technology, Beijing Institute of Technology, Beijing 100081, China.
Beijing Engineering Research Center of High Volume Language Information Processing and Cloud Computing Applications, Beijing 100081, China.
iScience. 2024 Jul 15;27(8):110509. doi: 10.1016/j.isci.2024.110509. eCollection 2024 Aug 16.
Magnetic resonance imaging (MRI), ultrasound (US), and contrast-enhanced ultrasound (CEUS) provide different image data about the uterus and have been used in the preoperative assessment of endometrial cancer. In practice, not all patients have complete multi-modality medical images, owing to high cost or long examination periods. Most existing methods must perform data cleansing or discard samples with missing modalities, which degrades model performance. In this work, we propose an incomplete multi-modality image data fusion method based on a shared latent relation to overcome this limitation. The shared space contains both the common latent feature representation and the modality-specific latent feature representations derived from complete and incomplete multi-modality data, jointly exploiting the consistent and complementary information among the multiple images. Experimental results show that our method outperforms current representative approaches in classification accuracy, sensitivity, specificity, and area under the curve (AUC). Furthermore, our method performs well under varying modality missing rates.
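To illustrate the general idea of fusing incomplete multi-modality features, the following is a minimal sketch (not the authors' actual model): each patient's available per-modality latent features are combined into a single shared representation by masking out missing modalities. The function name `fuse_incomplete`, the averaging rule, and the toy feature values are all illustrative assumptions; the paper's learned shared/modality-specific representations are more sophisticated.

```python
import numpy as np

def fuse_incomplete(features, available):
    """Fuse per-modality latent feature vectors into one shared vector.

    features:  (n_modalities, d) array of per-modality latent features;
               rows for missing modalities are ignored.
    available: boolean mask of length n_modalities (True = present).

    Returns the mean of the available modalities' features -- a simple
    stand-in for a learned common latent representation.
    """
    available = np.asarray(available, dtype=bool)
    if not available.any():
        raise ValueError("at least one modality must be present")
    return features[available].mean(axis=0)

# Hypothetical example: 3 modalities (MRI, US, CEUS), 4-dim features;
# the CEUS scan is missing for this patient, so its row is skipped.
feats = np.array([
    [1.0, 2.0, 3.0, 4.0],   # MRI
    [3.0, 2.0, 1.0, 0.0],   # US
    [9.0, 9.0, 9.0, 9.0],   # CEUS (missing -> ignored)
])
shared = fuse_incomplete(feats, [True, True, False])
print(shared)  # [2. 2. 2. 2.]
```

Because missing rows never enter the average, samples with incomplete modalities need not be discarded, which is the limitation the paper targets.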