Key Lab of Intelligent Perception and Image Understanding of Ministry of Education, Xidian University, Xi'an, Shaanxi, 710071, China.
Department of Radiation Oncology, University of California-Los Angeles, Los Angeles, CA, 90095, USA.
Med Phys. 2018 Oct;45(10):4558-4567. doi: 10.1002/mp.13147. Epub 2018 Sep 19.
Intensity modulated radiation therapy (IMRT) is commonly employed for treating head and neck (H&N) cancer with uniform tumor dose and conformal critical organ sparing. Accurate delineation of organs-at-risk (OARs) on H&N CT images is thus essential to treatment quality. Manual contouring used in current clinical practice is tedious, time-consuming, and can produce inconsistent results. Existing automated segmentation methods are challenged by the substantial inter-patient anatomical variation and low CT soft tissue contrast. To overcome the challenges, we developed a novel automated H&N OARs segmentation method that combines a fully convolutional neural network (FCNN) with a shape representation model (SRM).
Using manually segmented H&N CT scans, the SRM and FCNN were trained in two steps: (a) the SRM learned a latent shape representation of the H&N OARs from the training dataset; (b) the pre-trained SRM, with its parameters fixed, was used to constrain the FCNN training. The combined segmentation network was then used to delineate nine OARs, including the brainstem, optic chiasm, mandible, optic nerves, parotid glands, and submandibular glands, on unseen H&N CT images. Twenty-two and ten H&N CT scans provided by the Public Domain Database for Computational Anatomy (PDDCA) were used for training and validation, respectively. Dice similarity coefficient (DSC), positive predictive value (PPV), sensitivity (SEN), average surface distance (ASD), and 95% maximum surface distance (95%SD) were calculated to quantitatively evaluate segmentation accuracy. The proposed method was compared with the active appearance model that won the 2015 MICCAI H&N Segmentation Grand Challenge on the same dataset, as well as with an atlas-based method and a deep learning method evaluated on different patient datasets.
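The two-step training described above can be sketched as a shape-regularized loss: the frozen, pre-trained SRM maps both the predicted probability map and the ground truth into a shape code, and the discrepancy between the two codes penalizes anatomically implausible predictions. The toy sketch below uses a fixed linear projection as a stand-in encoder; the encoder architecture, cross-entropy formulation, and weighting `lam` are illustrative assumptions, not the paper's exact formulation.

```python
import math
import random

random.seed(0)

def srm_encode(prob_map, W):
    """Stand-in shape encoder: a frozen linear projection of the
    flattened probability map into a low-dimensional shape code.
    (The actual SRM is a learned network; this is illustrative only.)"""
    return [sum(w * p for w, p in zip(row, prob_map)) for row in W]

def combined_loss(pred, gt, W, lam=0.5):
    """Voxel-wise binary cross-entropy plus an SRM consistency penalty."""
    eps = 1e-7
    ce = -sum(g * math.log(p + eps) + (1 - g) * math.log(1 - p + eps)
              for p, g in zip(pred, gt)) / len(pred)
    code_p = srm_encode(pred, W)
    code_g = srm_encode(gt, W)
    shape = sum((a - b) ** 2 for a, b in zip(code_p, code_g)) / len(code_p)
    return ce + lam * shape

# Toy flattened 4x4 ground-truth mask and a noisy "predicted" probability map.
gt = [float(v) for v in
      [0, 0, 0, 0,  0, 1, 1, 0,  0, 1, 1, 0,  0, 0, 0, 0]]
pred = [min(max(g + random.gauss(0, 0.1), 0.01), 0.99) for g in gt]
W = [[random.gauss(0, 0.25) for _ in range(16)] for _ in range(8)]  # frozen SRM

print(combined_loss(pred, gt, W))
```

A near-perfect prediction incurs a lower combined loss than a noisy one, since both the cross-entropy term and the shape-code discrepancy shrink together.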
Average DSCs of 0.870 (brainstem), 0.583 (optic chiasm), 0.937 (mandible), 0.653 (left optic nerve), 0.689 (right optic nerve), 0.835 (left parotid), 0.832 (right parotid), 0.755 (left submandibular gland), and 0.813 (right submandibular gland) were achieved. These results are consistently superior to those of atlas-based and statistical-shape-based methods, as well as a patch-wise convolutional neural network method. Once the networks are trained offline, the average time to segment all nine OARs on an unseen CT scan is 9.5 s.
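The overlap metrics used above have simple closed forms on binary masks: DSC = 2|A∩B| / (|A| + |B|), PPV = TP / (TP + FP), and SEN = TP / (TP + FN). A minimal pure-Python sketch (masks flattened to 1-D sequences for brevity; the surface-distance metrics ASD and 95%SD are omitted, as they require distance-transform or mesh machinery):

```python
def dsc(pred, gt):
    """Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    tp = sum(p and g for p, g in zip(pred, gt))
    return 2.0 * tp / (sum(pred) + sum(gt))

def ppv(pred, gt):
    """Positive predictive value (precision): TP / (TP + FP)."""
    tp = sum(p and g for p, g in zip(pred, gt))
    return tp / sum(pred)

def sen(pred, gt):
    """Sensitivity (recall): TP / (TP + FN)."""
    tp = sum(p and g for p, g in zip(pred, gt))
    return tp / sum(gt)

# Toy flattened binary masks standing in for a predicted and a
# ground-truth organ segmentation (2 true positives, 1 FP, 1 FN).
pred = [1, 1, 1, 0, 0, 0]
gt   = [1, 1, 0, 1, 0, 0]
print(dsc(pred, gt), ppv(pred, gt), sen(pred, gt))
```

Here all three metrics evaluate to 2/3: two overlapping voxels against three predicted and three ground-truth voxels.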
Experiments on clinical datasets of H&N patients demonstrated the effectiveness of the proposed deep neural network method for multi-organ segmentation on volumetric CT scans. Incorporating shape priors through the SRM further increased the accuracy and robustness of the segmentation. The proposed method showed competitive performance and required less time to segment multiple organs than state-of-the-art methods.