Zhao Sizhe, Sun Qi, Yang Jinzhu, Yuan Yuliang, Huang Yan, Li Zhiqing
Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, Liaoning, China.
School of Computer Science and Engineering, Northeastern University, Shenyang, Liaoning, China.
Med Biol Eng Comput. 2025 Mar;63(3):609-627. doi: 10.1007/s11517-024-03195-9. Epub 2024 Oct 21.
Unsupervised domain adaptation (UDA) has attracted interest as a means of alleviating the burden of data annotation. Nevertheless, existing UDA segmentation methods suffer performance degradation on fine intracranial vessel segmentation tasks because of structure mismatch in the image synthesis procedure. To improve both the image synthesis quality and the segmentation performance, a novel UDA segmentation method with structure preservation, named StruP-Net, is proposed. StruP-Net employs adversarial learning for image synthesis and uses two domain-specific segmentation networks to enhance the semantic consistency between real and synthesized images. In addition, two distinct structure preservation approaches, feature-level structure preservation (F-SP) and image-level structure preservation (I-SP), are proposed to alleviate the structure mismatch arising during image synthesis. The F-SP, composed of two domain-specific graph convolutional networks (GCN), provides feature-level constraints that enhance the structural similarity between real and synthesized images, while the I-SP imposes structure-similarity constraints based on a perceptual loss. Cross-modality experiments from magnetic resonance angiography (MRA) images to computed tomography angiography (CTA) images show that StruP-Net achieves better segmentation performance than other state-of-the-art methods. Furthermore, its high inference efficiency demonstrates the clinical application potential of StruP-Net. The code is available at https://github.com/Mayoiuta/StruP-Net.
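As a rough illustration of the image-level structure preservation (I-SP) idea, the sketch below computes a perceptual (feature-reconstruction) loss between a real image and its synthesized counterpart using frozen VGG-16 features. The layer choice, the L1 distance, and the `PerceptualLoss` class name are assumptions for illustration only, not the authors' implementation.

```python
import torch
import torch.nn.functional as F
from torchvision import models

class PerceptualLoss(torch.nn.Module):
    """Minimal sketch of a perceptual loss for image-level structure preservation.

    Layer indices (relu1_2, relu2_2, relu3_3 of VGG-16) and the L1 distance are
    illustrative assumptions, not the paper's exact configuration.
    """

    def __init__(self, layer_ids=(3, 8, 15)):
        super().__init__()
        vgg = models.vgg16(weights=models.VGG16_Weights.DEFAULT).features.eval()
        for p in vgg.parameters():
            p.requires_grad_(False)          # frozen feature extractor
        self.vgg = vgg
        self.layer_ids = set(layer_ids)
        self.max_id = max(layer_ids)

    def forward(self, real, synthesized):
        # Both inputs: (N, 3, H, W); single-channel MRA/CTA slices would need
        # to be replicated to 3 channels and roughly ImageNet-normalized.
        loss, x, y = 0.0, real, synthesized
        for idx, layer in enumerate(self.vgg):
            x, y = layer(x), layer(y)
            if idx in self.layer_ids:
                loss = loss + F.l1_loss(x, y)  # structure-similarity constraint
            if idx == self.max_id:
                break
        return loss
```

In a full UDA pipeline, such a term would be combined with the adversarial and segmentation losses; here it only shows how a perceptual constraint on real/synthesized image pairs could be expressed.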