Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway.
Phys Med Biol. 2021 Mar 4;66(6):065012. doi: 10.1088/1361-6560/abe553.
Target volume delineation is a vital but time-consuming and challenging part of radiotherapy, where the goal is to deliver sufficient dose to the target while reducing risks of side effects. For head and neck cancer (HNC) this is complicated by the complex anatomy of the head and neck region and the proximity of target volumes to organs at risk. The purpose of this study was to compare and evaluate conventional PET thresholding methods, six classical machine learning algorithms and a 2D U-Net convolutional neural network (CNN) for automatic gross tumor volume (GTV) segmentation of HNC in PET/CT images. For the latter two approaches, the impact of single versus multimodality input on segmentation quality was also assessed. A total of 197 patients were included in the study. The cohort was split into training and test sets (157 and 40 patients, respectively). Five-fold cross-validation was used on the training set for model comparison and selection. Manual GTV delineations represented the ground truth. Thresholding, classical machine learning and CNN segmentation models were ranked separately according to the cross-validation Sørensen-Dice similarity coefficient (Dice). PET thresholding gave a maximum mean Dice of 0.62, whereas classical machine learning resulted in maximum mean Dice scores of 0.24 (CT) and 0.66 (PET; PET/CT). CNN models obtained maximum mean Dice scores of 0.66 (CT), 0.68 (PET) and 0.74 (PET/CT). The difference in cross-validation Dice between multimodality PET/CT and single modality CNN models was significant (p ≤ 0.0001). The top-ranked PET/CT-based CNN model outperformed the best-performing thresholding and classical machine learning models, giving significantly better segmentations in terms of cross-validation and test set Dice, true positive rate, positive predictive value and surface distance-based metrics (p ≤ 0.0001).
Thus, deep learning based on multimodality PET/CT input resulted in superior target coverage and less inclusion of surrounding normal tissue.
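The conventional PET thresholding methods mentioned above commonly segment all voxels whose uptake exceeds a fixed fraction of the maximum standardized uptake value (SUVmax). The abstract does not specify which thresholds were evaluated, so the following is only a minimal sketch of the general technique; the 40% fraction is an illustrative value often seen in the literature, not necessarily one used in this study.

```python
import numpy as np

def threshold_segment(pet: np.ndarray, fraction: float = 0.4) -> np.ndarray:
    """Binary GTV candidate mask: voxels at or above `fraction` * SUVmax.

    `pet` is a PET image volume of SUV values; `fraction` is an
    illustrative relative threshold (e.g. 0.4 for 40% of SUVmax).
    Returns a boolean mask of the same shape as `pet`.
    """
    suv_max = float(pet.max())
    return pet >= fraction * suv_max

# Toy 1D example: SUVmax is 10.0, so the 40% threshold is 4.0.
pet = np.array([0.5, 1.0, 5.0, 10.0])
mask = threshold_segment(pet)  # → [False, False, True, True]
```

In practice such a mask would be restricted to a region of interest around the tumor, since physiological uptake elsewhere (e.g. brain) can exceed the threshold.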
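The evaluation metrics reported above (Dice, true positive rate, positive predictive value) are standard overlap measures between a predicted segmentation and the manual ground-truth delineation. As a reference, here is a minimal NumPy sketch of these three metrics for binary voxel masks; function names are illustrative, not taken from the study's code.

```python
import numpy as np

def dice(pred: np.ndarray, gt: np.ndarray) -> float:
    """Sørensen-Dice similarity coefficient: 2|A∩B| / (|A| + |B|)."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    return 2.0 * inter / denom if denom else 1.0

def true_positive_rate(pred: np.ndarray, gt: np.ndarray) -> float:
    """Sensitivity: fraction of ground-truth GTV voxels the model recovers."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return np.logical_and(pred, gt).sum() / gt.sum()

def positive_predictive_value(pred: np.ndarray, gt: np.ndarray) -> float:
    """Precision: fraction of predicted voxels that lie inside the GTV."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    return np.logical_and(pred, gt).sum() / pred.sum()

# Toy example: ground truth has 3 positive voxels, prediction recovers 2.
gt = np.array([1, 1, 1, 0])
pred = np.array([1, 1, 0, 0])
print(dice(pred, gt))                       # 2*2 / (2+3) = 0.8
print(true_positive_rate(pred, gt))         # 2/3 ≈ 0.667
print(positive_predictive_value(pred, gt))  # 2/2 = 1.0
```

A high TPR with a low PPV indicates over-segmentation (extra normal tissue included), which is why the study reports both alongside Dice and surface distance-based metrics.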