Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA.
Department of Radiology and Imaging Sciences and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA.
Med Phys. 2019 Jul;46(7):3194-3206. doi: 10.1002/mp.13577. Epub 2019 May 29.
Transrectal ultrasound (TRUS) is a versatile, real-time imaging modality that is commonly used in image-guided prostate cancer interventions (e.g., biopsy and brachytherapy). Accurate segmentation of the prostate is key to biopsy needle placement, brachytherapy treatment planning, and motion management. Manual segmentation during these interventions is time-consuming and subject to inter- and intraobserver variation. To address these drawbacks, we aimed to develop a deep learning-based method that integrates deep supervision into a three-dimensional (3D) patch-based V-Net for prostate segmentation.
We developed a multidirectional deep learning-based method to automatically segment the prostate for ultrasound-guided radiation therapy. A 3D deep supervision mechanism is integrated into the V-Net stages to cope with the optimization difficulties of training a deep network with limited training data. We combine a binary cross-entropy (BCE) loss and a batch-based Dice loss into a stage-wise hybrid loss function for deeply supervised training. During segmentation, patches extracted from a newly acquired ultrasound image are fed to the trained network, which adaptively labels the prostate tissue. The final segmented prostate volume is reconstructed by patch fusion and further refined through contour refinement.
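The paper does not include code; the following is a minimal PyTorch sketch of how a stage-wise hybrid loss combining BCE with a batch-based Dice loss could be applied across V-Net stages for deep supervision. The names (HybridLoss, deep_supervision_loss, stage_weights) and the upsampling of stage outputs are illustrative assumptions, not the authors' implementation.

```python
# Sketch of a stage-wise hybrid (BCE + batch Dice) loss for deep supervision.
# Names and details are illustrative assumptions, not the paper's code.
import torch
import torch.nn as nn
import torch.nn.functional as F

class HybridLoss(nn.Module):
    """BCE + batch-based Dice loss applied to the logits of one V-Net stage."""
    def __init__(self, smooth: float = 1e-5):
        super().__init__()
        self.smooth = smooth

    def forward(self, logits: torch.Tensor, target: torch.Tensor) -> torch.Tensor:
        # logits, target: (batch, 1, D, H, W); target is a binary prostate mask.
        bce = F.binary_cross_entropy_with_logits(logits, target)
        prob = torch.sigmoid(logits)
        # Batch-based Dice: overlap accumulated over the whole batch.
        intersection = (prob * target).sum()
        union = prob.sum() + target.sum()
        dice = (2.0 * intersection + self.smooth) / (union + self.smooth)
        return bce + (1.0 - dice)

def deep_supervision_loss(stage_logits, target, stage_weights):
    """Weighted sum of the hybrid loss over V-Net stages; each stage output is
    upsampled to the ground-truth resolution (an assumed detail)."""
    criterion = HybridLoss()
    total = 0.0
    for logits, w in zip(stage_logits, stage_weights):
        if logits.shape[2:] != target.shape[2:]:
            logits = F.interpolate(logits, size=target.shape[2:],
                                   mode="trilinear", align_corners=False)
        total = total + w * criterion(logits, target)
    return total
```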
Forty-four patients' TRUS images were used to test our segmentation method. Our segmentation results were compared with the manually segmented contours (ground truth). The mean prostate volume Dice similarity coefficient (DSC), Hausdorff distance (HD), mean surface distance (MSD), and residual mean surface distance (RMSD) were 0.92 ± 0.03, 3.94 ± 1.55 mm, 0.60 ± 0.23 mm, and 0.90 ± 0.38 mm, respectively.
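For reference, the DSC and HD reported above can be computed from 3D binary masks as in the following sketch (NumPy/SciPy); the function names and the surface-extraction step are assumptions and simplify the full evaluation pipeline.

```python
# Sketch of DSC and Hausdorff distance computation from 3D binary masks.
# Function names are illustrative, not the paper's evaluation code.
import numpy as np
from scipy.ndimage import binary_erosion
from scipy.spatial.distance import directed_hausdorff

def dice_coefficient(pred: np.ndarray, gt: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return 2.0 * np.logical_and(pred, gt).sum() / (pred.sum() + gt.sum())

def surface_points(mask: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> np.ndarray:
    """Coordinates (in mm) of the voxels on the mask surface."""
    mask = mask.astype(bool)
    surface = mask & ~binary_erosion(mask)
    return np.argwhere(surface) * np.asarray(spacing)

def hausdorff_distance(pred: np.ndarray, gt: np.ndarray,
                       spacing=(1.0, 1.0, 1.0)) -> float:
    """Symmetric Hausdorff distance (mm) between the two mask surfaces."""
    p, g = surface_points(pred, spacing), surface_points(gt, spacing)
    return max(directed_hausdorff(p, g)[0], directed_hausdorff(g, p)[0])
```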
We developed a novel deeply supervised deep learning-based approach with reliable contour refinement to automatically segment the prostate on TRUS images, demonstrated its clinical feasibility, and validated its accuracy against manual segmentation. The proposed technique could be a useful tool for diagnostic and therapeutic applications in prostate cancer.