Orlando Nathan, Gillies Derek J, Gyacskov Igor, Romagnoli Cesare, D'Souza David, Fenster Aaron
Department of Medical Biophysics, Western University, London, ON, N6A 3K7, Canada.
Robarts Research Institute, Western University, London, ON, N6A 3K7, Canada.
Med Phys. 2020 Jun;47(6):2413-2426. doi: 10.1002/mp.14134. Epub 2020 Apr 8.
Needle-based procedures for diagnosing and treating prostate cancer, such as biopsy and brachytherapy, have incorporated three-dimensional (3D) transrectal ultrasound (TRUS) imaging to improve needle guidance. Using these images effectively typically requires the physician to manually segment the prostate to define the margins used for accurate registration, targeting, and other guidance techniques. However, manual prostate segmentation is a time-consuming and difficult intraoperative process, often performed while the patient is under sedation (biopsy) or anesthetic (brachytherapy). An automatic 3D TRUS prostate segmentation method could provide physicians with a fast and accurate segmentation, minimizing procedure time and supporting an efficient workflow with improved patient throughput and faster patient access to care. The purpose of this study was to develop a supervised deep learning-based method to segment the prostate in 3D TRUS images from different facilities, generated using multiple acquisition methods and commercial ultrasound machine models, to create a generalizable algorithm for needle-based prostate cancer procedures.
Our proposed method for 3D segmentation involved prediction on two-dimensional (2D) slices sampled radially around the approximate central axis of the prostate, followed by reconstruction into a 3D surface. A 2D U-Net was modified, trained, and validated using 84 end-fire and 122 side-fire 3D TRUS images acquired during clinical biopsy and brachytherapy procedures. Modifications to the expansion section of the standard U-Net included the addition of 50% dropout layers to reduce overfitting and the use of transposed convolutions in place of standard upsampling followed by convolution to improve performance. Manual contours provided the annotations for the training, validation, and testing datasets, with the testing dataset consisting of 20 end-fire and 20 side-fire unseen 3D TRUS images. Since predicting on 2D slices has the potential to lose spatial and structural information, the reconstructed 3D segmentations were compared to optimized 3D networks, including 3D V-Net, Dense V-Net, and High-resolution 3D-Net, following an investigation into different loss functions. An extended selection of absolute and signed error metrics was computed, including pixel map comparisons [Dice similarity coefficient (DSC), recall, and precision], volume percent differences (VPD), mean surface distance (MSD), and Hausdorff distance (HD), to assess 3D segmentation accuracy.
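For illustration, the expansion-section modifications described above can be sketched as a single decoder stage. The following is a minimal PyTorch sketch under assumed details the abstract does not specify (the `UpBlock` name, channel counts, and activation choice are hypothetical); it shows learned upsampling via a transposed convolution followed by skip concatenation, two convolutions, and 50% dropout, not the authors' exact implementation.

```python
import torch
import torch.nn as nn

class UpBlock(nn.Module):
    """One modified U-Net decoder stage: transposed convolution for learned
    upsampling, skip concatenation, two 3x3 convolutions, and 50% dropout."""

    def __init__(self, in_ch, out_ch):
        super().__init__()
        # Transposed convolution replaces standard upsampling followed by convolution
        self.up = nn.ConvTranspose2d(in_ch, out_ch, kernel_size=2, stride=2)
        self.conv = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, padding=1),  # assumes in_ch == 2 * out_ch
            nn.ReLU(inplace=True),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Dropout(p=0.5),  # 50% dropout added to reduce overfitting
        )

    def forward(self, x, skip):
        x = self.up(x)                   # (N, in_ch, H, W) -> (N, out_ch, 2H, 2W)
        x = torch.cat([skip, x], dim=1)  # concatenate the encoder skip connection
        return self.conv(x)

# Example: upsample a 256-channel feature map and merge a 128-channel skip connection.
block = UpBlock(256, 128)
out = block(torch.randn(1, 256, 32, 32), torch.randn(1, 128, 64, 64))  # -> (1, 128, 64, 64)
```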
Overall, our proposed reconstructed modified U-Net performed with a median [first quartile, third quartile] absolute DSC, recall, precision, VPD, MSD, and HD of 94.1 [92.6, 94.9]%, 96.0 [93.1, 98.5]%, 93.2 [88.8, 95.4]%, 5.78 [2.49, 11.50]%, 0.89 [0.73, 1.09] mm, and 2.89 [2.37, 4.35] mm, respectively. When compared to the best-performing optimized 3D network (i.e., 3D V-Net with a Dice plus cross-entropy loss function), our proposed method demonstrated a significant improvement across nearly all metrics. A computation time of <0.7 s per prostate was observed, sufficiently short for intraoperative implementation.
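As a point of reference, the Dice plus cross-entropy loss used by the best-performing 3D comparator combines a soft Dice term with a binary cross-entropy term. The sketch below is a minimal PyTorch illustration assuming a sigmoid output and equal weighting of the two terms; the class name, smoothing constant, and weighting are assumptions, not details reported in the abstract.

```python
import torch
import torch.nn as nn

class DiceCrossEntropyLoss(nn.Module):
    """Soft Dice loss plus binary cross-entropy for binary segmentation."""

    def __init__(self, smooth=1.0):
        super().__init__()
        self.smooth = smooth
        self.bce = nn.BCEWithLogitsLoss()

    def forward(self, logits, target):
        # target is a float tensor of 0s and 1s with the same shape as logits
        probs = torch.sigmoid(logits)
        intersection = (probs * target).sum()
        dice = (2.0 * intersection + self.smooth) / (probs.sum() + target.sum() + self.smooth)
        return (1.0 - dice) + self.bce(logits, target)  # equal weighting assumed

# Example: loss on a batch of 2D slice predictions.
criterion = DiceCrossEntropyLoss()
loss = criterion(torch.randn(4, 1, 128, 128), torch.randint(0, 2, (4, 1, 128, 128)).float())
```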
Our proposed algorithm was able to provide a fast and accurate 3D segmentation across variable 3D TRUS prostate images, enabling a generalizable intraoperative solution for needle-based prostate cancer procedures. This method has the potential to decrease procedure times, supporting the increasing interest in needle-based 3D TRUS approaches.