Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA 30322, United States of America.
Phys Med Biol. 2021 May 20;66(11). doi: 10.1088/1361-6560/abfce2.
Organ delineation is crucial to diagnosis and therapy, yet it is labor-intensive and observer-dependent. Dual energy CT (DECT) provides additional image contrast compared with conventional single energy CT (SECT), which may facilitate automatic organ segmentation. This work aims to develop a deep-learning-based automatic multi-organ segmentation approach for the head-and-neck region on DECT. We proposed a mask scoring regional convolutional neural network (R-CNN) in which comprehensive features are first learned by two independent pyramid networks and then combined via a deep attention strategy to highlight the informative features extracted from the low- and high-energy CT channels. To perform multi-organ segmentation and avoid misclassification, a mask scoring subnetwork was integrated into the Mask R-CNN framework to build the correlation between the class of a potentially detected organ's region-of-interest (ROI) and the shape of that organ's segmentation within that ROI. We evaluated our model on DECT images from 127 head-and-neck cancer patients (66 for training, 61 for testing), with manual contours of 19 organs serving as training targets and ground truth. For large- and mid-sized organs such as the brain and parotid glands, the proposed method achieved average Dice similarity coefficients (DSCs) above 0.8. For small organs with very low contrast, such as the chiasm, cochleae, lenses and optic nerves, DSCs ranged from approximately 0.5 to 0.8. With the proposed method, using DECT images outperformed using SECT in almost all 19 organs, with statistically significant differences in DSC (p < 0.05). Using DECT, the proposed method was also significantly superior to a recently developed FCN-based method in most organs in terms of DSC and the 95th-percentile Hausdorff distance (HD95). These quantitative results demonstrate the feasibility of the proposed method, the superiority of DECT over SECT, and the advantage of the proposed R-CNN over the FCN on this head-and-neck patient study. The proposed method has the potential to facilitate treatment planning in the current head-and-neck cancer radiation therapy workflow.
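To make the dual-channel fusion idea concrete, below is a minimal PyTorch sketch of attention-based fusion of features from two independent pyramid branches (low- and high-energy CT). This is not the authors' released code; the module name, layer layout, and channel sizes are illustrative assumptions of how concatenated dual-energy features could be re-weighted by a learned attention map before the detection and segmentation heads.

```python
# Hypothetical sketch of dual-energy attention fusion (not the paper's code).
import torch
import torch.nn as nn


class DualEnergyAttentionFusion(nn.Module):
    def __init__(self, channels: int = 256):
        super().__init__()
        # 1x1 convolutions produce a per-pixel, per-channel attention map
        # from the concatenated low/high-energy pyramid features.
        self.attention = nn.Sequential(
            nn.Conv2d(2 * channels, channels, kernel_size=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, 2 * channels, kernel_size=1),
            nn.Sigmoid(),
        )
        # Project the re-weighted features back to the expected channel count.
        self.project = nn.Conv2d(2 * channels, channels, kernel_size=1)

    def forward(self, feat_low: torch.Tensor, feat_high: torch.Tensor) -> torch.Tensor:
        fused = torch.cat([feat_low, feat_high], dim=1)   # (N, 2C, H, W)
        weights = self.attention(fused)                   # emphasize informative channels
        return self.project(fused * weights)              # (N, C, H, W)


if __name__ == "__main__":
    fusion = DualEnergyAttentionFusion(channels=256)
    low = torch.randn(1, 256, 64, 64)    # one low-energy pyramid level
    high = torch.randn(1, 256, 64, 64)   # matching high-energy pyramid level
    print(fusion(low, high).shape)       # torch.Size([1, 256, 64, 64])
```

In this sketch the same fusion module would be applied at each pyramid level, so the downstream Mask R-CNN heads see a single fused feature pyramid rather than two separate ones.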
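The two reported evaluation metrics, DSC and HD95, can be computed from binary masks as in the generic NumPy/SciPy sketch below. This is not the paper's evaluation script; the function names and the voxel-spacing handling are assumptions for illustration.

```python
# Generic implementations of the Dice similarity coefficient (DSC) and the
# 95th-percentile Hausdorff distance (HD95) on binary 3D masks.
import numpy as np
from scipy.ndimage import binary_erosion, distance_transform_edt


def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0


def hd95(pred: np.ndarray, truth: np.ndarray, spacing=(1.0, 1.0, 1.0)) -> float:
    pred, truth = pred.astype(bool), truth.astype(bool)
    # Surface voxels are those removed by a one-voxel erosion.
    pred_surface = pred ^ binary_erosion(pred)
    truth_surface = truth ^ binary_erosion(truth)
    # Distance from each voxel to the nearest surface voxel of the other mask,
    # scaled by the physical voxel spacing (assumed isotropic by default).
    dist_to_truth = distance_transform_edt(~truth_surface, sampling=spacing)
    dist_to_pred = distance_transform_edt(~pred_surface, sampling=spacing)
    surface_dists = np.concatenate([dist_to_truth[pred_surface],
                                    dist_to_pred[truth_surface]])
    return float(np.percentile(surface_dists, 95))
```

Using the 95th percentile rather than the maximum surface distance makes the Hausdorff metric less sensitive to a few outlier voxels, which is why it is commonly reported alongside DSC for organ-at-risk segmentation.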