Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA.
Department of Undeclared Engineering, University of California, Berkeley, CA, 94720, USA.
Med Phys. 2019 May;46(5):2157-2168. doi: 10.1002/mp.13458. Epub 2019 Mar 22.
Accurate and timely segmentation of organs at risk (OARs) is key to efficient, high-quality radiation therapy planning. The purpose of this work is to develop a deep learning-based method to automatically segment multiple thoracic OARs on chest computed tomography (CT) for radiotherapy treatment planning.
We propose an adversarial training strategy to train deep neural networks for the segmentation of multiple organs on thoracic CT images. The proposed design, called U-Net-generative adversarial network (U-Net-GAN), jointly trains a set of U-Nets as generators and fully convolutional networks (FCNs) as discriminators. Specifically, the generator, a U-Net, produces a multi-organ segmentation map through an end-to-end mapping learned from the CT image to the segmented OARs. The discriminator, structured as an FCN, distinguishes the ground-truth contours from the OARs segmented by the generator. The generator and discriminator compete against each other in an adversarial learning process to produce the optimal segmentation map of multiple organs. The segmentation results were compared with manually segmented OARs (ground truth) to quantify geometric differences, and dosimetric performance was assessed by examining dose-volume histograms in 20 stereotactic body radiation therapy (SBRT) lung plans.
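For illustration only, the following PyTorch sketch shows one way such an adversarial training step for a U-Net generator and an FCN discriminator could be structured. The network depths, channel counts, loss weights, and the train_step helper are assumptions chosen for brevity; they are not the authors' implementation.

```python
# Minimal sketch (assumed architecture, not the authors' code): a U-Net-style
# generator is trained against an FCN discriminator that judges (CT, label-map)
# pairs as ground truth or generated.
import torch
import torch.nn as nn

N_CLASSES = 6  # background + 5 OARs (left/right lung, spinal cord, esophagus, heart)

def conv_block(in_ch, out_ch):
    return nn.Sequential(
        nn.Conv2d(in_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
        nn.Conv2d(out_ch, out_ch, 3, padding=1), nn.BatchNorm2d(out_ch), nn.ReLU(inplace=True),
    )

class UNetGenerator(nn.Module):
    """Shallow U-Net: one encoder/decoder level shown for brevity."""
    def __init__(self):
        super().__init__()
        self.enc1 = conv_block(1, 32)
        self.pool = nn.MaxPool2d(2)
        self.enc2 = conv_block(32, 64)
        self.up = nn.ConvTranspose2d(64, 32, 2, stride=2)
        self.dec1 = conv_block(64, 32)
        self.head = nn.Conv2d(32, N_CLASSES, 1)

    def forward(self, x):
        e1 = self.enc1(x)
        e2 = self.enc2(self.pool(e1))
        d1 = self.dec1(torch.cat([self.up(e2), e1], dim=1))
        return self.head(d1)  # per-pixel class logits

class FCNDiscriminator(nn.Module):
    """Fully convolutional critic: scores (CT, segmentation-map) pairs patch-wise."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(1 + N_CLASSES, 32, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.LeakyReLU(0.2, inplace=True),
            nn.Conv2d(64, 1, 4, padding=1),  # real/fake logits per patch
        )

    def forward(self, ct, seg):
        return self.net(torch.cat([ct, seg], dim=1))

def train_step(gen, disc, opt_g, opt_d, ct, gt_onehot, adv_weight=0.01):
    bce = nn.BCEWithLogitsLoss()
    ce = nn.CrossEntropyLoss()

    # Discriminator: ground-truth maps labeled real, generator outputs labeled fake.
    with torch.no_grad():
        fake_seg = torch.softmax(gen(ct), dim=1)
    d_real, d_fake = disc(ct, gt_onehot), disc(ct, fake_seg)
    loss_d = bce(d_real, torch.ones_like(d_real)) + bce(d_fake, torch.zeros_like(d_fake))
    opt_d.zero_grad(); loss_d.backward(); opt_d.step()

    # Generator: segmentation loss plus an adversarial term that rewards fooling the critic.
    logits = gen(ct)
    d_out = disc(ct, torch.softmax(logits, dim=1))
    loss_g = ce(logits, gt_onehot.argmax(dim=1)) + adv_weight * bce(d_out, torch.ones_like(d_out))
    opt_g.zero_grad(); loss_g.backward(); opt_g.step()
    return loss_d.item(), loss_g.item()

if __name__ == "__main__":
    gen, disc = UNetGenerator(), FCNDiscriminator()
    opt_g = torch.optim.Adam(gen.parameters(), lr=2e-4)
    opt_d = torch.optim.Adam(disc.parameters(), lr=2e-4)
    ct = torch.randn(2, 1, 128, 128)                        # dummy CT slices
    gt = torch.zeros(2, N_CLASSES, 128, 128); gt[:, 0] = 1  # dummy one-hot labels
    print(train_step(gen, disc, opt_g, opt_d, ct, gt))
```

The adversarial term acts as a learned, image-level shape prior: the discriminator penalizes label maps that look implausible as a whole, complementing the per-pixel segmentation loss.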
This segmentation technique was applied to delineate the left and right lungs, spinal cord, esophagus, and heart using chest CTs from 35 patients. The average Dice similarity coefficients for these five OARs were 0.97, 0.97, 0.90, 0.75, and 0.87, respectively. The mean surface distance of the five OARs obtained with the proposed method averaged between 0.4 and 1.5 mm across all 35 patients. The mean dose differences for the five OARs on the 20 SBRT lung plans ranged from -0.001 to 0.155 Gy.
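For reference, a minimal NumPy/SciPy sketch of the two geometric metrics reported above, the Dice similarity coefficient and the mean surface distance, computed on binary masks. The function names and the symmetric surface-distance convention are assumptions for illustration, not taken from the paper.

```python
# Hedged sketch of the reported geometric metrics on binary masks.
import numpy as np
from scipy import ndimage

def dice_coefficient(pred, gt):
    """Dice similarity coefficient: 2|A ∩ B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    denom = pred.sum() + gt.sum()
    return 2.0 * np.logical_and(pred, gt).sum() / denom if denom else 1.0

def surface_voxels(mask):
    """Boundary voxels of a binary mask (mask minus its erosion)."""
    return np.logical_and(mask, np.logical_not(ndimage.binary_erosion(mask)))

def mean_surface_distance(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """Symmetric mean surface distance in mm, given voxel spacing."""
    pred_s, gt_s = surface_voxels(pred.astype(bool)), surface_voxels(gt.astype(bool))
    # Distance from every voxel to the nearest surface voxel of the other mask.
    dist_to_gt = ndimage.distance_transform_edt(np.logical_not(gt_s), sampling=spacing)
    dist_to_pred = ndimage.distance_transform_edt(np.logical_not(pred_s), sampling=spacing)
    d1 = dist_to_gt[pred_s]    # prediction surface -> ground-truth surface
    d2 = dist_to_pred[gt_s]    # ground-truth surface -> prediction surface
    return np.concatenate([d1, d2]).mean()

if __name__ == "__main__":
    gt = np.zeros((32, 64, 64), bool); gt[8:24, 16:48, 16:48] = True
    pred = np.zeros_like(gt); pred[9:25, 17:49, 17:49] = True  # slightly shifted box
    print("DSC:", dice_coefficient(pred, gt))
    print("MSD (mm):", mean_surface_distance(pred, gt, spacing=(2.5, 1.0, 1.0)))
```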
We have investigated a novel deep learning-based approach with a GAN strategy to segment multiple OARs in the thorax using chest CT images and demonstrated its feasibility and reliability. This is a potentially valuable method for improving the efficiency of chest radiotherapy treatment planning.