Center for Biomedical Imaging and Bioinformatics, School of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, 430074, China.
Institute of Artificial Intelligence, Huazhong University of Science and Technology, Wuhan, 430074, China.
Comput Biol Med. 2022 Oct;149:105964. doi: 10.1016/j.compbiomed.2022.105964. Epub 2022 Aug 19.
Multi-modal medical image segmentation has achieved great success with supervised deep learning networks. However, because of domain shift and limited annotation information, unpaired cross-modality segmentation remains challenging. Unsupervised domain adaptation (UDA) methods can alleviate the performance degradation in cross-modality segmentation by transferring knowledge between domains, but current methods still suffer from model collapse, unstable adversarial training, and mismatched anatomical structures. To tackle these issues, we propose a bidirectional multilayer contrastive adaptation network (BMCAN) for unpaired cross-modality segmentation. First, a shared encoder is adopted to learn modality-invariant encoding representations for image synthesis and segmentation simultaneously. Second, to retain anatomical structure consistency in cross-modality image synthesis, we present a structure-constrained cross-modality image translation approach for image alignment. Third, we construct a bidirectional multilayer contrastive learning approach that preserves anatomical structures and enhances the encoding representations, using two groups of domain-specific multilayer perceptron (MLP) networks to learn modality-specific features. Finally, a semantic information adversarial learning approach is designed to learn the structural similarity of semantic outputs for output-space alignment. The proposed method was evaluated on three cross-modality segmentation tasks: brain tissue, brain tumor, and cardiac substructure segmentation. Experimental results show that BMCAN achieves state-of-the-art segmentation performance on all three tasks compared with other UDA methods, with fewer training components and better feature representations for overcoming overfitting and domain shift. The proposed method can efficiently reduce the annotation burden on radiologists in cross-modality image analysis.
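To make the contrastive component concrete, below is a minimal PyTorch sketch of a multilayer, patch-wise InfoNCE loss with MLP projection heads, in the spirit of the bidirectional multilayer contrastive learning described in the abstract. The embedding size, patch sampling, and exact loss form are our assumptions for illustration, not the authors' implementation; in the bidirectional setting, this loss would be computed once per translation direction, each direction with its own domain-specific group of projectors.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of the multilayer contrastive objective. Layer choices,
# dimensions, and the InfoNCE form are assumptions, not the paper's exact code.

class MLPProjector(nn.Module):
    """Domain-specific MLP head that projects one encoder layer's features
    into a normalized embedding space for contrastive learning."""
    def __init__(self, in_dim: int, out_dim: int = 256):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, out_dim),
            nn.ReLU(inplace=True),
            nn.Linear(out_dim, out_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return F.normalize(self.net(x), dim=-1)


def patch_nce_loss(query: torch.Tensor, positive: torch.Tensor,
                   temperature: float = 0.07) -> torch.Tensor:
    """InfoNCE over sampled feature patches: each query patch should match
    the patch at the same spatial location in the translated image (positive)
    and repel all other sampled patches. Shapes: (num_patches, dim)."""
    logits = query @ positive.t() / temperature            # (N, N) similarities
    labels = torch.arange(query.size(0), device=query.device)
    return F.cross_entropy(logits, labels)                 # diagonal = positives


def multilayer_contrastive_loss(feats_src, feats_trans, projectors,
                                num_patches: int = 64) -> torch.Tensor:
    """Sum the patch-wise contrastive loss over several encoder layers.
    feats_src / feats_trans: lists of feature maps (B, C, H, W) from the shared
    encoder for a source image and its cross-modality translation; projectors:
    one MLPProjector per layer (in_dim matching that layer's channel count)."""
    total = 0.0
    for f_s, f_t, proj in zip(feats_src, feats_trans, projectors):
        b, c, h, w = f_s.shape
        # Sample the same random spatial locations from both feature maps so
        # that corresponding patches form positive pairs.
        idx = torch.randperm(h * w, device=f_s.device)[:num_patches]
        q = proj(f_s.flatten(2).permute(0, 2, 1)[:, idx].reshape(-1, c))
        p = proj(f_t.flatten(2).permute(0, 2, 1)[:, idx].reshape(-1, c))
        total = total + patch_nce_loss(q, p)
    return total
```

Sampling corresponding locations from the source feature maps and from those of its translation is one plausible way to enforce the anatomical-structure preservation the abstract attributes to this loss.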
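Similarly, the output-space alignment step can be illustrated with a small fully convolutional discriminator over softmax segmentation maps. The discriminator architecture and the least-squares GAN loss below are assumptions; the abstract states only that structural similarities of the semantic outputs are learned adversarially.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

# Hypothetical sketch of output-space adversarial alignment. The network and
# the LSGAN-style loss are illustrative choices, not the paper's exact setup.

class OutputDiscriminator(nn.Module):
    """Fully convolutional discriminator over softmax segmentation maps,
    judging whether a prediction comes from the source or target domain."""
    def __init__(self, num_classes: int, ndf: int = 64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(num_classes, ndf, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf, ndf * 2, 4, stride=2, padding=1),
            nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(ndf * 2, 1, 4, stride=2, padding=1),  # patch-level real/fake map
        )

    def forward(self, seg_softmax: torch.Tensor) -> torch.Tensor:
        return self.net(seg_softmax)


def adversarial_alignment_step(seg_src, seg_tgt, disc):
    """One alignment step: the segmenter is pushed to make target-domain
    predictions structurally resemble source-domain ones in output space.
    seg_src / seg_tgt: segmentation logits of shape (B, num_classes, H, W)."""
    p_src = F.softmax(seg_src, dim=1)
    p_tgt = F.softmax(seg_tgt, dim=1)
    # Discriminator loss: source predictions labeled 1, target labeled 0.
    d_src = disc(p_src.detach())
    d_tgt = disc(p_tgt.detach())
    d_loss = 0.5 * (F.mse_loss(d_src, torch.ones_like(d_src))
                    + F.mse_loss(d_tgt, torch.zeros_like(d_tgt)))
    # Segmenter loss: fool the discriminator on target-domain predictions.
    g_out = disc(p_tgt)
    g_loss = F.mse_loss(g_out, torch.ones_like(g_out))
    return d_loss, g_loss
```

In practice the two losses would be stepped with separate optimizers for the discriminator and the segmentation network, as is standard in adversarial UDA.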