Department of Urology, Stanford University, 300 Pasteur Drive, Stanford, CA 94305, USA.
Centre for Medical Image Computing, Wellcome/EPSRC Centre for Interventional & Surgical Sciences, and Department of Medical Physics & Biomedical Engineering, University College London, 66-72 Gower St, London WC1E 6EA, UK.
Med Image Anal. 2022 Nov;82:102620. doi: 10.1016/j.media.2022.102620. Epub 2022 Sep 13.
Prostate biopsy and image-guided treatment procedures are often performed under the guidance of ultrasound fused with magnetic resonance imaging (MRI). Accurate image fusion relies on accurate segmentation of the prostate on ultrasound images. Yet the reduced signal-to-noise ratio and artifacts in ultrasound images (e.g., speckle and shadowing) limit the performance of automated prostate segmentation techniques, and generalizing these methods to new image domains is inherently difficult. In this study, we address these challenges by introducing a novel 2.5D deep neural network for prostate segmentation on ultrasound images. Our approach addresses a key limitation of transfer learning and fine-tuning methods (i.e., the drop in performance on the original training data when model weights are updated) by combining a supervised domain adaptation technique with a knowledge distillation loss. The knowledge distillation loss preserves previously learned knowledge and reduces the performance drop after the model is fine-tuned on new datasets. Furthermore, our approach relies on an attention module that incorporates feature positional information to improve segmentation accuracy. We trained our model on 764 subjects from one institution and fine-tuned it using only ten subjects from the subsequent institutions. We analyzed the performance of our method on three large datasets encompassing 2067 subjects from three different institutions. Our method achieved an average Dice Similarity Coefficient (Dice) of 94.0±0.03 and 95th-percentile Hausdorff Distance (HD95) of 2.28 mm on an independent set of subjects from the first institution. Moreover, our model generalized well to the studies from the other two institutions (Dice: 91.0±0.03, HD95: 3.7 mm; and Dice: 82.0±0.03, HD95: 7.1 mm).
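The abstract describes combining a segmentation objective with a knowledge distillation term during fine-tuning, so that a frozen copy of the originally trained model ("teacher") regularizes the updated model ("student") and limits forgetting. The following is a minimal NumPy sketch of that idea, not the authors' implementation; the temperature `T`, the weighting `alpha`, and the specific loss forms (soft Dice plus temperature-softened cross-entropy) are illustrative assumptions.

```python
import numpy as np

def softmax(logits, axis=0):
    # numerically stable softmax over the class axis
    e = np.exp(logits - logits.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def dice_loss(probs, target, eps=1e-6):
    # soft Dice loss on the foreground probability map
    inter = (probs * target).sum()
    return 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)

def kd_loss(student_logits, teacher_logits, T=2.0):
    # temperature-softened cross-entropy between teacher and student
    # class maps, averaged over pixels (scaled by T^2, as is conventional)
    p_t = softmax(teacher_logits / T, axis=0)
    log_p_s = np.log(softmax(student_logits / T, axis=0) + 1e-12)
    return -(p_t * log_p_s).sum(axis=0).mean() * T * T

def finetune_loss(student_logits, teacher_logits, target, alpha=0.5):
    # combined objective for fine-tuning on a new institution's data:
    # supervised Dice on the new labels + distillation toward the teacher
    fg_probs = softmax(student_logits, axis=0)[1]
    return (1.0 - alpha) * dice_loss(fg_probs, target) \
        + alpha * kd_loss(student_logits, teacher_logits)
```

Because the distillation term penalizes divergence from the teacher's predictions, a student identical to the teacher incurs only the teacher's own entropy, which is the minimum of the cross-entropy term.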
We introduced an approach that successfully segmented the prostate on ultrasound images in a multi-center study, suggesting its clinical potential to facilitate accurate fusion of ultrasound and MR images to drive biopsy and image-guided treatments.
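The two metrics reported above, Dice and HD95, can both be computed from a pair of binary masks. The sketch below is a plain NumPy illustration of those definitions, not the paper's evaluation code; the cross-shaped erosion used to extract surface points and the isotropic `spacing` parameter are simplifying assumptions.

```python
import numpy as np

def dice_coefficient(a, b, eps=1e-6):
    # Dice Similarity Coefficient between two binary masks
    a, b = a.astype(bool), b.astype(bool)
    inter = np.logical_and(a, b).sum()
    return (2.0 * inter + eps) / (a.sum() + b.sum() + eps)

def _surface_points(mask):
    # boundary voxels: foreground with at least one background
    # axis-neighbour (erosion with a cross-shaped structuring element)
    padded = np.pad(mask, 1)
    eroded = padded.copy()
    for axis in range(mask.ndim):
        eroded &= np.roll(padded, 1, axis) & np.roll(padded, -1, axis)
    boundary = padded & ~eroded
    inner = tuple(slice(1, -1) for _ in range(mask.ndim))
    return np.argwhere(boundary[inner])

def hd95(a, b, spacing=1.0):
    # 95th-percentile symmetric Hausdorff distance between mask surfaces
    pa, pb = _surface_points(a.astype(bool)), _surface_points(b.astype(bool))
    d = np.linalg.norm(pa[:, None, :] - pb[None, :, :], axis=-1) * spacing
    return max(np.percentile(d.min(axis=1), 95),
               np.percentile(d.min(axis=0), 95))
```

Using the 95th percentile rather than the maximum makes the boundary distance robust to a few outlier surface points, which is why HD95 is commonly preferred over the plain Hausdorff distance in segmentation benchmarks.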