Department of Radiology, University of Michigan, Ann Arbor, MI, 48109, USA.
School of Data and Computer Science, Sun Yat-Sen University, Guangzhou, 510275, P.R. China.
Med Phys. 2019 Apr;46(4):1752-1765. doi: 10.1002/mp.13438. Epub 2019 Feb 28.
To develop a U-Net-based deep learning approach (U-DL) for bladder segmentation in computed tomography urography (CTU) as a part of a computer-assisted bladder cancer detection and treatment response assessment pipeline.
A dataset of 173 cases, including 81 cases in the training/validation set (42 with masses, 21 with wall thickening, 18 normal bladders) and 92 cases in the test set (43 with masses, 36 with wall thickening, 13 normal bladders), was used with Institutional Review Board approval. An experienced radiologist provided three-dimensional (3D) hand outlines for all cases as the reference standard. We previously developed a bladder segmentation method that used a deep learning convolutional neural network and level sets (DCNN-LS) within a user-input bounding box. However, it produced inaccurate segmentations for some cases with poor image quality or with advanced bladder cancer spreading into the neighboring organs. We have newly developed an automated U-DL method to estimate a likelihood map of the bladder in CTU. The U-DL required neither a user-input bounding box nor level sets for postprocessing. To identify the best model for this task, we compared the following: (a) two-dimensional (2D) and 3D U-DLs using 2D CT slices and 3D CT volumes, respectively, as input; (b) U-DLs using CT images of different resolutions as input; and (c) U-DLs with and without automated cropping of the bladder as an image preprocessing step. Segmentation accuracy relative to the reference standard was quantified by six measures: average volume intersection ratio (AVI), average percent volume error (AVE), average absolute volume error (AAVE), average minimum distance (AMD), average Hausdorff distance (AHD), and average Jaccard index (AJI). The results from our previous DCNN-LS method were used as a baseline.
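The volume-overlap measures among the six metrics can be sketched for a single case as follows. This is a minimal illustration, not the authors' implementation: the exact formulas used in the paper are not reproduced in the abstract, so the per-case definitions below (in particular the sign convention of the percent volume error) are assumptions.

```python
import numpy as np

def volume_intersection_ratio(seg, ref):
    """Fraction of the reference volume covered by the segmentation
    (assumed per-case definition of the volume intersection ratio)."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    return np.logical_and(seg, ref).sum() / ref.sum()

def percent_volume_error(seg, ref):
    """Signed volume error relative to the reference volume, in percent.
    Assumed convention: negative values indicate over-segmentation."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    return 100.0 * (ref.sum() - seg.sum()) / ref.sum()

def jaccard_index(seg, ref):
    """Standard Jaccard index: |A ∩ B| / |A ∪ B| over binary voxel masks."""
    seg, ref = seg.astype(bool), ref.astype(bool)
    return np.logical_and(seg, ref).sum() / np.logical_or(seg, ref).sum()

# Toy 3D masks: an 8-voxel segmentation shifted by one slice
# relative to an 8-voxel reference, overlapping in 4 voxels.
seg = np.zeros((4, 4, 4), dtype=bool)
seg[1:3, 1:3, 1:3] = True
ref = np.zeros((4, 4, 4), dtype=bool)
ref[1:3, 1:3, 2:4] = True

print(volume_intersection_ratio(seg, ref))  # 0.5   (4 of 8 ref voxels covered)
print(percent_volume_error(seg, ref))       # 0.0   (equal volumes)
print(jaccard_index(seg, ref))              # 4/12 ≈ 0.333
```

Averaging these per-case values over the test set would give AVI, AVE, and AJI; AAVE averages the absolute value of the percent volume error instead of the signed one.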
In the test set, the best 2D U-DL model achieved AVI, AVE, AAVE, AMD, AHD, and AJI values of 93.4 ± 9.5%, -4.2 ± 14.2%, 9.2 ± 11.5%, 2.7 ± 2.5 mm, 9.7 ± 7.6 mm, and 85.0 ± 11.3%, respectively, while the corresponding measures for the best 3D U-DL were 90.6 ± 11.9%, -2.3 ± 21.7%, 11.5 ± 18.5%, 3.1 ± 3.2 mm, 11.4 ± 10.0 mm, and 82.6 ± 14.2%. For comparison, the corresponding values obtained with the baseline method on the same test set were 81.9 ± 12.1%, 10.2 ± 16.2%, 14.0 ± 13.0%, 3.6 ± 2.0 mm, 12.8 ± 6.1 mm, and 76.2 ± 11.8%. The improvements in all measures between the best U-DL and the DCNN-LS were statistically significant (P < 0.001).
Compared to the previous DCNN-LS method, which required a user-input bounding box, the U-DL provided more accurate bladder segmentation and a higher degree of automation.