Huang Xiaoqiong, Yang Xin, Dou Haoran, Huang Yuhao, Zhang Li, Liu Zhendong, Yan Zhongnuo, Liu Lian, Zou Yuxin, Hu Xindi, Gao Rui, Zhang Yuanji, Xiong Yi, Xue Wufeng, Ni Dong
National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen, China; Medical Ultrasound Image Computing (MUSIC) Laboratory, Shenzhen University, Shenzhen, China; Marshall Laboratory of Biomedical Engineering, Shenzhen University, Shenzhen, China.
Centre for Computational Imaging and Simulation Technologies in Biomedicine (CISTIB), University of Leeds, UK.
Comput Methods Programs Biomed. 2023 May;233:107477. doi: 10.1016/j.cmpb.2023.107477. Epub 2023 Mar 14.
Deep learning models often suffer performance degradation when deployed in real clinical environments due to appearance shifts between training and testing images. Most existing methods rely on training-time adaptation, which typically requires target-domain samples during the training phase. These solutions are therefore constrained by the training process and cannot guarantee accurate predictions on test samples with unforeseen appearance shifts. Moreover, collecting target samples in advance is often impractical. In this paper, we present a general method for making existing segmentation models robust to samples with unknown appearance shifts when deployed in daily clinical practice.
Our proposed test-time bi-directional adaptation framework combines two complementary strategies. First, our image-to-model (I2M) adaptation strategy adapts test images of arbitrary appearance to the learned segmentation model using a novel plug-and-play statistical alignment style transfer module during testing. Second, our model-to-image (M2I) adaptation strategy adapts the learned segmentation model to test images with unknown appearance shifts. This strategy applies an augmented self-supervised learning module that fine-tunes the learned model with proxy labels generated by the model itself, a procedure adaptively constrained by our novel proxy consistency criterion. Together, the complementary I2M and M2I strategies achieve robust segmentation against unknown appearance shifts using existing deep-learning models.
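The I2M direction rests on statistical alignment: re-normalizing a test image's statistics to match those of the source (training) domain so the frozen model sees familiar appearance. The sketch below illustrates this idea at the image level with per-channel mean/std matching; it is a minimal illustration under our own assumptions, not the paper's module, which operates as a learned plug-and-play component inside the network.

```python
import numpy as np

def statistical_alignment(test_image, source_mean, source_std, eps=1e-8):
    """Align per-channel statistics of an H x W x C test image to
    source-domain statistics (hypothetical helper, illustrative only).

    Each channel is standardized to zero mean / unit variance, then
    rescaled to the source mean and standard deviation.
    """
    test_mean = test_image.mean(axis=(0, 1), keepdims=True)
    test_std = test_image.std(axis=(0, 1), keepdims=True)
    normalized = (test_image - test_mean) / (test_std + eps)
    return normalized * source_std + source_mean

# Example: shift a synthetic "dark, low-contrast" test image toward
# assumed source statistics before feeding it to the segmentation model.
rng = np.random.default_rng(0)
img = 0.2 + 0.05 * rng.standard_normal((64, 64, 3))   # simulated appearance shift
src_mean = np.array([0.5, 0.5, 0.5])                   # assumed source stats
src_std = np.array([0.15, 0.15, 0.15])
aligned = statistical_alignment(img, src_mean, src_std)
```

After alignment, `aligned` has (approximately) the source mean and standard deviation in each channel, so only style, not spatial content, is changed. The same matching can instead be applied to intermediate feature maps, which is closer in spirit to a learned in-network alignment module.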
Extensive experiments on 10 datasets containing fetal ultrasound, chest X-ray, and retinal fundus images demonstrate that our proposed method achieves promising robustness and efficiency in segmenting images with unknown appearance shifts.
To address the appearance shift problem in clinically acquired medical images, we provide robust segmentation through two complementary strategies. Our solution is general and amenable to deployment in clinical settings.