Bian Xuesheng, Luo Xiongbiao, Wang Cheng, Liu Weiquan, Lin Xiuhong
Fujian Key Laboratory of Sensing and Computing for Smart Cities, Department of Computer Science, School of Informatics, Xiamen University, Xiamen 361005, China; National Institute for Data Science in Health and Medicine, Xiamen University, Xiamen 361005, China.
Fujian Key Laboratory of Sensing and Computing for Smart Cities, Department of Computer Science, School of Informatics, Xiamen University, Xiamen 361005, China.
Comput Methods Programs Biomed. 2022 Jan;213:106531. doi: 10.1016/j.cmpb.2021.106531. Epub 2021 Nov 14.
Deep convolutional networks are powerful tools for single-modality medical image segmentation, but they generally require semantic labelling or annotation that is laborious and time-consuming. Moreover, domain shift among modalities critically degrades the performance of deep convolutional networks that are trained only on single-modality labelled data.
In this paper, we propose an end-to-end unsupervised cross-modality segmentation network, DDA-Net, for accurate medical image segmentation without semantic annotation or labelling on the target domain. To close the domain gap, images from different domains are mapped into a shared domain-invariant representation space. In addition, a cross-modality auto-encoder is introduced to preserve spatial position information, which helps maintain the spatial structural consistency of the semantic information.
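To make the described pipeline concrete, the following is a minimal PyTorch-style sketch of this kind of unsupervised cross-modality setup: a shared encoder mapping both modalities into one representation space, a domain discriminator that adversarially aligns the two feature distributions, a segmentation head supervised only on the labelled source modality, and a reconstruction branch standing in for the cross-modality auto-encoder that preserves spatial structure. All module names, loss weights, and architectural choices here are illustrative assumptions, not the authors' implementation.

```python
# Hedged sketch (not the authors' code): shared encoder + adversarial feature
# alignment + source-only segmentation supervision + reconstruction branch.
import torch
import torch.nn as nn
import torch.nn.functional as F


def conv_block(c_in, c_out):
    return nn.Sequential(
        nn.Conv2d(c_in, c_out, 3, padding=1), nn.BatchNorm2d(c_out), nn.ReLU(inplace=True)
    )


class SharedEncoder(nn.Module):
    """Maps images of either modality into a shared (ideally domain-invariant) feature space."""
    def __init__(self, in_ch=1, feat=64):
        super().__init__()
        self.net = nn.Sequential(conv_block(in_ch, feat), conv_block(feat, feat))

    def forward(self, x):
        return self.net(x)


class SegHead(nn.Module):
    """Predicts segmentation logits from shared features (supervised on the source modality only)."""
    def __init__(self, feat=64, n_classes=5):
        super().__init__()
        self.out = nn.Conv2d(feat, n_classes, 1)

    def forward(self, z):
        return self.out(z)


class ReconDecoder(nn.Module):
    """Auto-encoder branch: reconstructs the input image so spatial position information is kept."""
    def __init__(self, feat=64, out_ch=1):
        super().__init__()
        self.net = nn.Sequential(conv_block(feat, feat), nn.Conv2d(feat, out_ch, 1))

    def forward(self, z):
        return self.net(z)


class DomainDiscriminator(nn.Module):
    """Classifies whether features come from the source or the target modality."""
    def __init__(self, feat=64):
        super().__init__()
        self.net = nn.Sequential(conv_block(feat, feat), nn.Conv2d(feat, 1, 1))

    def forward(self, z):
        return self.net(z)


def training_step(enc, seg, rec, disc, opt_g, opt_d, x_src, y_src, x_tgt):
    # Generator update: segmentation on source, reconstruction on target,
    # and an adversarial term pushing target features toward the source distribution.
    z_src, z_tgt = enc(x_src), enc(x_tgt)
    loss_seg = F.cross_entropy(seg(z_src), y_src)
    loss_rec = F.l1_loss(rec(z_tgt), x_tgt)
    d_tgt = disc(z_tgt)
    loss_adv = F.binary_cross_entropy_with_logits(d_tgt, torch.ones_like(d_tgt))
    opt_g.zero_grad()
    (loss_seg + loss_rec + 0.01 * loss_adv).backward()  # 0.01 is an arbitrary example weight
    opt_g.step()

    # Discriminator update: distinguish source features (label 1) from target features (label 0).
    z_src, z_tgt = enc(x_src).detach(), enc(x_tgt).detach()
    d_src, d_tgt = disc(z_src), disc(z_tgt)
    loss_d = (F.binary_cross_entropy_with_logits(d_src, torch.ones_like(d_src))
              + F.binary_cross_entropy_with_logits(d_tgt, torch.zeros_like(d_tgt)))
    opt_d.zero_grad()
    loss_d.backward()
    opt_d.step()
    return loss_seg.item(), loss_rec.item(), loss_d.item()
```

In this kind of design, the discriminator only ever sees features, so the segmentation head trained on the labelled modality can be applied directly to the unlabelled target modality once the feature distributions are aligned; the reconstruction term keeps the aligned features from discarding the spatial layout the segmentation needs.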
We validated the proposed DDA-Net on cross-modality medical image datasets of brain and cardiac images. The experimental results show that DDA-Net effectively alleviates domain shift and suppresses model degradation.
The proposed DDA-Net successfully closes the domain gap between different medical image modalities and achieves state-of-the-art performance in cross-modality medical image segmentation. It can also be generalized to other semi-supervised or unsupervised segmentation tasks in other fields.