Cheng Zhiming, Wang Shuai, Gao Yuhan, Zhu Zunjie, Yan Chenggang
School of Automation, Hangzhou Dianzi University, Hangzhou, 310018, China.
School of Cyberspace, Hangzhou Dianzi University, Hangzhou, 310018, China.
J Imaging Inform Med. 2024 Dec;37(6):3193-3207. doi: 10.1007/s10278-024-01088-9. Epub 2024 May 17.
Owing to privacy constraints, domain generalization (DG) for medical image segmentation often must learn from a single source domain while remaining robust on unseen target domains. To achieve this goal, previous methods mainly use data augmentation to expand the sample distribution and learn invariant content from the augmented data. However, most of these methods apply only global augmentation, which limits the diversity of the augmented samples. In addition, the styles of the augmented images are more scattered than those of the source domain, which may cause the model to overfit the source-domain style. To address these issues, we propose an invariant content representation network (ICRN) that strengthens the learning of invariant content while suppressing the learning of variable styles. Specifically, we first design a gamma correction-based local style augmentation (LSA) that expands the sample distribution by augmenting foreground and background styles separately. Then, based on the augmented samples, we introduce invariant content learning (ICL) to learn generalizable invariant content from both augmented and source-domain samples. Finally, we design style adversarial learning (SAL) based on domain-specific batch normalization (DSBN) to suppress the model's preference for source-domain styles. Experimental results show that, compared with state-of-the-art DG methods, our approach improves the overall Dice coefficient (Dice) by 8.74% and 11.33% and reduces the overall average surface distance (ASD) by 15.88 mm and 3.87 mm on two publicly available cross-domain datasets, Fundus and Prostate. The code is available at https://github.com/ZMC-IIIM/ICRN-DG.
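The local style augmentation described above could be sketched roughly as follows. This is a minimal illustration only, assuming single-channel intensity images and a binary foreground mask; the function name, gamma range, and normalization details are hypothetical assumptions, not taken from the paper's released code:

```python
import numpy as np

def local_style_augment(image, fg_mask, gamma_fg=None, gamma_bg=None, rng=None):
    """Apply independently sampled gamma curves to the foreground and
    background regions of an image (a sketch of LSA-style augmentation).

    image   : 2-D array of intensities.
    fg_mask : 2-D binary array, 1 for foreground pixels.
    """
    if rng is None:
        rng = np.random.default_rng()
    # Sample separate gamma values for the two regions if not given
    # (the [0.5, 2.0] range here is an illustrative assumption).
    if gamma_fg is None:
        gamma_fg = rng.uniform(0.5, 2.0)
    if gamma_bg is None:
        gamma_bg = rng.uniform(0.5, 2.0)

    img = image.astype(np.float64)
    lo, hi = img.min(), img.max()
    norm = (img - lo) / (hi - lo + 1e-8)  # normalize intensities to [0, 1]

    # Gamma-correct foreground and background with different exponents,
    # producing a locally varied style while keeping content unchanged.
    aug = np.where(fg_mask.astype(bool), norm ** gamma_fg, norm ** gamma_bg)

    # Map back to the original intensity range.
    return aug * (hi - lo) + lo
```

Because the two regions receive independent gamma curves, repeated sampling yields more diverse style combinations than a single global gamma applied to the whole image.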