Weston Alexander D, Korfiatis Panagiotis, Philbrick Kenneth A, Conte Gian Marco, Kostandy Petro, Sakinis Thomas, Zeinoddini Atefeh, Boonrod Arunnit, Moynagh Michael, Takahashi Naoki, Erickson Bradley J
Health Sciences Research, Mayo Clinic, 4500 San Pablo Road S, Jacksonville, FL, 32250, USA.
Department of Radiology, Mayo Clinic, 200 First St SW, Rochester, MN, 55905, USA.
Med Phys. 2020 Nov;47(11):5609-5618. doi: 10.1002/mp.14422. Epub 2020 Oct 7.
Organ segmentation of computed tomography (CT) imaging is essential for radiotherapy treatment planning. Treatment planning requires segmentation not only of the affected tissue but also of nearby healthy organs-at-risk, a task that is laborious and time-consuming when done manually. We present a fully automated segmentation method based on the three-dimensional (3D) U-Net convolutional neural network (CNN) capable of segmenting the whole abdomen and pelvis into 33 unique organ and tissue structures, including tissues that other automated segmentation approaches may overlook, such as adipose tissue, skeletal muscle, and connective tissue and vessels. Whole abdomen segmentation makes it possible to quantify exposure not just for a handful of organs-at-risk but for all tissues within the abdomen.
Sixty-six (66) CT examinations of 64 individuals were included in the training and validation sets, and 18 CT examinations from 16 individuals were included in the test set. All pixels in each examination were segmented by image analysts (with physician correction) and assigned one of 33 labels. Segmentation was performed with a 3D U-Net variant architecture that included residual blocks, and model performance was quantified on the 18 test cases. Human interobserver variability (using semiautomated segmentation) was also reported on two scans, and manual interobserver variability among three readers was reported on one scan. Model performance was further compared to several of the best multiple-organ segmentation models reported in the literature.
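The paper does not publish its network code, but the core idea of a residual block inside a 3D U-Net can be sketched as follows. This is a minimal, illustrative NumPy version with a single channel and a naive loop-based convolution (`conv3d`, `residual_block`, and the kernels are hypothetical names, not the authors' implementation; a real model would use an optimized deep-learning framework):

```python
import numpy as np

def conv3d(vol, kernel):
    """Naive 'same'-padded single-channel 3D convolution via explicit loops.
    Illustrative only -- real 3D U-Nets use optimized library convolutions."""
    kd, kh, kw = kernel.shape
    pd, ph, pw = kd // 2, kh // 2, kw // 2
    padded = np.pad(vol, ((pd, pd), (ph, ph), (pw, pw)))
    out = np.zeros_like(vol, dtype=float)
    D, H, W = vol.shape
    for z in range(D):
        for y in range(H):
            for x in range(W):
                out[z, y, x] = np.sum(padded[z:z+kd, y:y+kh, x:x+kw] * kernel)
    return out

def relu(x):
    return np.maximum(x, 0.0)

def residual_block(vol, k1, k2):
    """out = ReLU(conv(ReLU(conv(vol))) + vol).
    The skip connection lets the block learn only a residual correction,
    which eases optimization of deep volumetric networks."""
    h = relu(conv3d(vol, k1))
    h = conv3d(h, k2)
    return relu(h + vol)

# Tiny demo volume: with zero-weight kernels the block reduces to ReLU(vol),
# i.e., the identity path carries the input through unchanged.
vol = np.abs(np.random.default_rng(0).normal(size=(4, 4, 4)))
zero_k = np.zeros((3, 3, 3))
out = residual_block(vol, zero_k, zero_k)
```

The identity check at the bottom is the practical point of residual blocks: even an untrained (or badly initialized) block cannot destroy the input signal, which is one reason residual variants of U-Net train stably at 3D scale.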
The accuracy of the 3D U-Net model ranges from a Dice coefficient of 0.95 in the liver down to 0.51 in the renal arteries, with 0.93 in the kidneys, 0.79 in the pancreas, and 0.69 in the adrenals. Model accuracy is within 5% of human segmentation in eight of 19 organs and within 10% in 13 of 19 organs.
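The Dice coefficient used here measures the volumetric overlap between a predicted mask and the reference segmentation, 2|A∩B| / (|A| + |B|). A minimal sketch of this metric for binary masks (the function name `dice_coefficient` is ours, not from the paper):

```python
import numpy as np

def dice_coefficient(pred, truth):
    """Dice similarity: 2*|A intersect B| / (|A| + |B|) for binary masks.
    Returns 1.0 for two empty masks by convention."""
    pred = np.asarray(pred, dtype=bool)
    truth = np.asarray(truth, dtype=bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom else 1.0

# Toy example: masks share 1 voxel; each has 2 voxels -> 2*1/(2+2) = 0.5
pred = np.array([1, 1, 0, 0])
truth = np.array([1, 0, 1, 0])
print(dice_coefficient(pred, truth))  # prints 0.5
```

A Dice of 0.95 (liver) therefore means 95% volumetric agreement with the reference, while 0.51 (renal arteries) reflects the difficulty of thin, branching structures, where small boundary errors cost a large fraction of the total volume.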
The CNN approaches the accuracy of human tracers and, for certain complex organs, produces more consistent predictions than human tracers. Fully automated deep learning-based segmentation of the CT abdomen has the potential to improve both the speed and the accuracy of radiotherapy dose prediction for organs-at-risk.