Xiang Dehui, Peng Tao, Bian Yun, Chen Lang, Zeng Jianbin, Shi Fei, Zhu Weifang, Chen Xinjian
IEEE Trans Biomed Eng. 2025 Feb;72(2):664-674. doi: 10.1109/TBME.2024.3467216. Epub 2025 Jan 21.
Multi-modal MR/CT image segmentation is an important task in disease diagnosis and treatment, but aligned multi-modal images of a patient are usually difficult to acquire in clinical practice, due to the high cost and the risk of allergic reactions to contrast agents in some patients. To address these issues, a task complementation framework is proposed that enables unpaired multi-modal image complementation learning in the training stage and single-modal image segmentation in the inference stage.
To fuse unpaired dual-modal images during training while allowing single-modal image segmentation at inference, a synthesis-segmentation task complementation network is constructed in which cross-modal image synthesis and segmentation mutually facilitate each other, since the same content feature can drive both tasks. To keep the segmentation consistent for target organs with varied shapes, a curvature consistency loss is proposed to align the segmentation predictions of the original image and its cross-modal synthesized counterpart. To segment small lesions and substructures, a regression-segmentation task complementation network is constructed that exploits auxiliary features of the target organ.
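The abstract does not give the exact form of the curvature consistency loss. The sketch below is one plausible realization in PyTorch, assuming the curvature of each segmentation probability map is approximated as the divergence of its normalized gradient (kappa = div(grad p / |grad p|)) computed with finite differences, and that the loss is the L1 distance between the curvatures of the two predictions; all names, the tensor layout, and the exact formulation are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a curvature consistency loss between the segmentation prediction
# of an original image and that of its cross-modal synthesized counterpart.
# Assumption: predictions are (B, 1, H, W) foreground probability maps.
import torch
import torch.nn.functional as F


def _spatial_gradients(p: torch.Tensor) -> tuple[torch.Tensor, torch.Tensor]:
    """Central-difference gradients of a (B, 1, H, W) map (replicate-padded)."""
    p = F.pad(p, (1, 1, 1, 1), mode="replicate")
    gx = 0.5 * (p[:, :, 1:-1, 2:] - p[:, :, 1:-1, :-2])
    gy = 0.5 * (p[:, :, 2:, 1:-1] - p[:, :, :-2, 1:-1])
    return gx, gy


def curvature(p: torch.Tensor, eps: float = 1e-6) -> torch.Tensor:
    """Mean curvature of the level sets of p: div(grad p / |grad p|)."""
    gx, gy = _spatial_gradients(p)
    norm = torch.sqrt(gx ** 2 + gy ** 2 + eps)
    nx, ny = gx / norm, gy / norm
    dnx_dx, _ = _spatial_gradients(nx)
    _, dny_dy = _spatial_gradients(ny)
    return dnx_dx + dny_dy


def curvature_consistency_loss(pred_orig: torch.Tensor,
                               pred_synth: torch.Tensor) -> torch.Tensor:
    """L1 distance between the curvatures of the two segmentation predictions."""
    return F.l1_loss(curvature(pred_orig), curvature(pred_synth))


if __name__ == "__main__":
    p1 = torch.rand(2, 1, 64, 64)  # prediction on the original image
    p2 = torch.rand(2, 1, 64, 64)  # prediction on the synthesized image
    print(curvature_consistency_loss(p1, p2).item())
```

In this reading, penalizing curvature differences rather than raw probability differences emphasizes boundary shape agreement between the two predictions, which matches the stated goal of keeping target-organ shape consistent across the original and synthesized modalities.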
Comprehensive experiments have been performed on an in-house dataset and a publicly available dataset. The results demonstrate that our framework outperforms state-of-the-art methods.
The proposed method fuses dual-modal CT/MR images in the training stage and requires only single-modal CT/MR images in the inference stage.
The proposed method can be used in routine clinical settings where only a single-modal CT or MR image is available for a patient.