Sarkar Sagnik, Teo P Troy, Abazeed Mohamed E
Department of Radiation Oncology, Feinberg School of Medicine, Northwestern University, Chicago, IL, USA.
Robert H. Lurie Cancer Center, Northwestern University, Chicago, IL, USA.
NPJ Precis Oncol. 2025 Jun 30;9(1):173. doi: 10.1038/s41698-025-00970-1.
Accurate tumor delineation is foundational to radiotherapy. In the era of deep learning, automating this labor-intensive and variation-prone process is increasingly tractable. We developed a deep neural network model to segment gross tumor volumes (GTVs) in the lung and propagate them across 4D CT images to generate an internal target volume (ITV), capturing tumor motion during respiration. Using a multicenter cohort-based registry from 9 clinics across 2 health systems, we trained a 3D UNet model (iSeg) on pre-treatment CT images and corresponding GTV masks (n = 739, 5-fold cross-validation) and validated it on two independent cohorts (n = 161; n = 102). The internal cohort achieved a median Dice similarity coefficient (DSC) of 0.73 [IQR: 0.62-0.80], with comparable performance in the external cohorts (DSC = 0.70 [0.52-0.78] and 0.71 [0.59-0.79]), demonstrating generalizability across sites. iSeg matched human inter-observer variability and was robust to image quality and tumor motion (DSC = 0.77 [0.68-0.86]). Machine-generated ITVs were significantly smaller than physician-delineated contours (p < 0.0001), indicating more precise delineation. Notably, a higher false-positive voxel rate (regions segmented by the machine but not the human) was associated with increased local failure (HR: 1.01 per voxel, p = 0.03), suggesting the clinical relevance of these discordant regions. These results mark a leap in automated target volume segmentation and suggest that machine delineation can enhance the accuracy, reproducibility, and efficiency of this core task in radiotherapy.
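The core quantities in the abstract can be illustrated concretely. The following is a minimal sketch, not the authors' code: it forms an ITV as the voxelwise union of GTV masks propagated across respiratory phases of a 4D CT, scores agreement with the Dice similarity coefficient (DSC), and counts false-positive voxels (regions segmented by the model but not the reference). Array shapes and function names are illustrative assumptions.

```python
# Hypothetical sketch of ITV construction and the metrics named in the
# abstract; array layout (phases, z, y, x) is an assumption, not the paper's.
import numpy as np

def itv_from_phases(phase_masks: np.ndarray) -> np.ndarray:
    """Voxelwise union of per-phase GTV masks (phases, z, y, x) -> ITV (z, y, x)."""
    return np.any(phase_masks.astype(bool), axis=0)

def dice(pred: np.ndarray, ref: np.ndarray) -> float:
    """DSC = 2|A ∩ B| / (|A| + |B|); defined as 1.0 when both masks are empty."""
    pred, ref = pred.astype(bool), ref.astype(bool)
    denom = pred.sum() + ref.sum()
    if denom == 0:
        return 1.0
    return 2.0 * np.logical_and(pred, ref).sum() / denom

def false_positive_voxels(pred: np.ndarray, ref: np.ndarray) -> int:
    """Voxels segmented by the model but absent from the reference contour."""
    return int(np.logical_and(pred.astype(bool), ~ref.astype(bool)).sum())

# Toy example: a 4-voxel "tumor" that shifts by one voxel between two phases.
phases = np.zeros((2, 1, 1, 5), dtype=bool)
phases[0, 0, 0, 0:4] = True   # phase 0 occupies voxels 0-3
phases[1, 0, 0, 1:5] = True   # phase 1 occupies voxels 1-4
itv = itv_from_phases(phases)  # union covers voxels 0-4

print(itv.sum())                        # 5 voxels in the ITV
print(round(dice(itv, phases[0]), 3))   # 2*4 / (5+4) = 0.889
print(false_positive_voxels(itv, phases[0]))  # 1 voxel outside the reference
```

The union captures the full respiratory excursion of the tumor, which is why machine ITVs can be compared voxel-for-voxel against physician contours, and why machine-only (false-positive) voxels are a natural per-voxel covariate for the reported local-failure association.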