Yang Kaifan, Dong Xiuyu, Tang Fan, Ye Feng, Chen Bei, Liang Shujun, Zhang Yu, Xu Yikai
Department of Medical Imaging Center, Nanfang Hospital, Southern Medical University, Guangzhou, Guangdong, China.
School of Biomedical Engineering, Southern Medical University, Guangzhou, Guangdong, China.
Front Oncol. 2024 Jun 14;14:1377366. doi: 10.3389/fonc.2024.1377366. eCollection 2024.
Accurate tumor target contouring and T staging are vital for precision radiation therapy in nasopharyngeal carcinoma (NPC). Manually identifying the T-stage and contouring the gross tumor volume (GTV) is a laborious and highly time-consuming process. Previous deep learning studies have focused mainly on tumor segmentation, and few have specifically addressed tumor staging of NPC.
To bridge this gap, we aimed to devise a model that can simultaneously identify the T-stage and accurately segment the GTV in NPC.
We developed a transformer-based multi-task deep learning model that performs two tasks simultaneously: delineating the tumor contour and identifying the T-stage. This retrospective study involved contrast-enhanced T1-weighted images (CE-T1WI) of 320 NPC patients (T-stage: T1-T4) collected at our institution between 2017 and 2020. Patients were randomly allocated into three cohorts for three-fold cross-validation, and external validation was conducted on an independent test set. Predictive performance was evaluated using the area under the receiver operating characteristic curve (ROC-AUC) and accuracy (ACC) with 95% confidence intervals (CIs), and contouring performance was evaluated using the Dice similarity coefficient (DSC) and average surface distance (ASD).
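The abstract does not detail the network architecture, so the snippet below is only a minimal PyTorch sketch of the general design it describes: a shared transformer encoder feeding both a GTV segmentation decoder and a T-stage classification head, trained with a weighted joint loss. The layer sizes, patch size, 2D single-slice input, and loss weights are illustrative assumptions, not the authors' configuration.

import torch
import torch.nn as nn

class MultiTaskNPCNet(nn.Module):
    """Hypothetical shared-encoder network: one transformer trunk, two task heads."""
    def __init__(self, img_size=128, patch=16, dim=256, depth=4, heads=8, n_stages=4):
        super().__init__()
        self.n_patches = (img_size // patch) ** 2
        # Patch embedding for a single-channel CE-T1WI slice
        self.embed = nn.Conv2d(1, dim, kernel_size=patch, stride=patch)
        self.pos = nn.Parameter(torch.zeros(1, self.n_patches, dim))
        layer = nn.TransformerEncoderLayer(d_model=dim, nhead=heads,
                                           dim_feedforward=4 * dim, batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=depth)
        # Segmentation head: project tokens to a coarse mask and upsample to input size
        self.seg_head = nn.Sequential(
            nn.Conv2d(dim, 64, 3, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 1, 1),
            nn.Upsample(scale_factor=patch, mode="bilinear", align_corners=False))
        # Classification head: mean-pooled tokens -> T1..T4 logits
        self.cls_head = nn.Sequential(nn.LayerNorm(dim), nn.Linear(dim, n_stages))

    def forward(self, x):
        b = x.size(0)
        tokens = self.embed(x)                          # (B, dim, H/p, W/p)
        h, w = tokens.shape[-2:]
        tokens = tokens.flatten(2).transpose(1, 2)      # (B, N, dim)
        tokens = self.encoder(tokens + self.pos)        # shared representation
        grid = tokens.transpose(1, 2).reshape(b, -1, h, w)
        seg_logits = self.seg_head(grid)                # (B, 1, H, W) GTV mask logits
        cls_logits = self.cls_head(tokens.mean(dim=1))  # (B, n_stages) T-stage logits
        return seg_logits, cls_logits

def multitask_loss(seg_logits, cls_logits, mask, stage, w_seg=1.0, w_cls=0.5):
    """Joint objective: Dice + BCE for the GTV mask, cross-entropy for the T-stage."""
    bce = nn.functional.binary_cross_entropy_with_logits(seg_logits, mask)
    prob = torch.sigmoid(seg_logits)
    dice = 1 - (2 * (prob * mask).sum() + 1) / (prob.sum() + mask.sum() + 1)
    ce = nn.functional.cross_entropy(cls_logits, stage)
    return w_seg * (bce + dice) + w_cls * ce

# Dummy forward pass: a batch of two 128x128 CE-T1WI slices
seg, cls = MultiTaskNPCNet()(torch.randn(2, 1, 128, 128))

Sharing one encoder lets the staging head see the same features that localize the tumor, which is the synergy the multi-task setup is meant to exploit.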
Our multi-task model performed well in GTV contouring (median DSC: 0.74; ASD: 0.97 mm) and T staging (AUC: 0.85, 95% CI: 0.82-0.87) across the 320 patients. In early T-category tumors, the model achieved a median DSC of 0.74 and an ASD of 0.98 mm; in advanced T-category tumors, it reached a median DSC of 0.74 and an ASD of 0.96 mm. The accuracy of automated T staging was 76% (126 of 166) for early stages (T1-T2) and 64% (99 of 154) for advanced stages (T3-T4). Moreover, the multi-task model outperformed the corresponding single-task models.
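For reference, the reported contouring and staging metrics can be computed as in the following NumPy/SciPy sketch; it follows one common convention for the DSC, symmetric ASD, and per-patient staging accuracy, and may differ in detail from the authors' evaluation code (the ROC-AUC would typically be obtained from a library routine such as sklearn.metrics.roc_auc_score).

import numpy as np
from scipy import ndimage

def dice_coefficient(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + 1e-8)

def average_surface_distance(pred, gt, spacing=(1.0, 1.0, 1.0)):
    """Symmetric average surface distance (mm) between two 3-D binary masks."""
    def surface(mask):
        # Surface voxels = mask minus its one-voxel erosion
        return np.logical_xor(mask, ndimage.binary_erosion(mask))
    s_pred, s_gt = surface(pred.astype(bool)), surface(gt.astype(bool))
    # Distance from every voxel to the nearest surface voxel of the other mask
    d_to_gt = ndimage.distance_transform_edt(~s_gt, sampling=spacing)
    d_to_pred = ndimage.distance_transform_edt(~s_pred, sampling=spacing)
    return 0.5 * (d_to_gt[s_pred].mean() + d_to_pred[s_gt].mean())

def staging_accuracy(pred_stages, true_stages):
    """Fraction of patients whose predicted T-stage matches the reference stage."""
    pred_stages, true_stages = np.asarray(pred_stages), np.asarray(true_stages)
    return float((pred_stages == true_stages).mean())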
This study highlights the potential of a multi-task model to simultaneously delineate the tumor contour and identify the T-stage. The multi-task model harnesses the synergy between these interrelated learning tasks, improving the performance of both. These results suggest that our approach can serve as a practical tool for supporting precision radiation therapy in clinical practice.