Wan Qi, Lindsay Clifford, Zhang Chenxi, Kim Jisoo, Chen Xin, Li Jing, Huang Raymond Y, Reardon David A, Young Geoffrey S, Qin Lei
Department of Radiology, the Key Laboratory of Advanced Interdisciplinary Studies Center, the First Affiliated Hospital of Guangzhou Medical University, Guangzhou, China.
Department of Imaging, Dana-Farber Cancer Institute, Harvard Medical School, Boston, MA, USA.
Cancer Imaging. 2025 Jan 21;25(1):5. doi: 10.1186/s40644-024-00818-0.
Radiomic analysis of quantitative features extracted from segmented medical images can be used for predictive modeling of prognosis in brain tumor patients. Manual segmentation of tumor components is time-consuming and poses significant reproducibility issues. We compared the prediction of overall survival (OS) in recurrent high-grade glioma (HGG) patients undergoing immunotherapy using a deep learning (DL) classification network and using radiomic signatures derived from manual and convolutional neural network (CNN)-based automated segmentation.
We retrospectively retrieved 154 cases of recurrent HGG from multiple centers. Tumor segmentation was performed both by expert radiologists and by a convolutional neural network (CNN). From the segmented tumors, 2553 radiomic features were extracted per case. A robust feature subset was selected using intraclass correlation coefficient (ICC) analysis between the manual and automated segmentations. The data were split 9:1 and evaluated with ten-fold cross-validation and a rotating test set. Feature selection was performed with the Kruskal-Wallis test. Radiomics-based OS predictions, generated with a support vector machine (SVM), were compared between the two segmentation approaches and against OS prediction by the CNN model adapted for classification. Model efficacy was evaluated using the area under the receiver operating characteristic curve (AUC).
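The modeling pipeline described above can be sketched as follows. This is a minimal illustration on synthetic data, not the authors' code: the feature count, the ICC(2,1) variant, the 0.8 robustness threshold, the 0.05 significance cutoff, and the SVM kernel are all illustrative assumptions.

```python
# Sketch of the abstract's pipeline: ICC-based robust-feature filtering between
# manual and automated segmentations, Kruskal-Wallis univariate selection, and
# an SVM evaluated by ten-fold cross-validated AUC. Synthetic data throughout.
import numpy as np
from scipy.stats import kruskal
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n_cases, n_features = 154, 200            # paper extracted 2553 features; fewer here
manual = rng.normal(size=(n_cases, n_features))                 # manual segmentation
auto = manual + rng.normal(scale=0.3, size=manual.shape)        # automated segmentation
y = rng.integers(0, 2, size=n_cases)      # dichotomized OS label (synthetic)

def icc_2_1(a, b):
    """Two-way random, absolute-agreement ICC(2,1) for two 'raters'."""
    x = np.stack([a, b], axis=1)          # shape (n_subjects, 2)
    n, k = x.shape
    grand = x.mean()
    ms_rows = k * ((x.mean(axis=1) - grand) ** 2).sum() / (n - 1)
    ms_cols = n * ((x.mean(axis=0) - grand) ** 2).sum() / (k - 1)
    ms_err = ((x - x.mean(axis=1, keepdims=True)
                 - x.mean(axis=0, keepdims=True) + grand) ** 2).sum() / ((n - 1) * (k - 1))
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err
                                 + k * (ms_cols - ms_err) / n)

# 1) keep features reproducible across the two segmentation methods
robust = [j for j in range(n_features) if icc_2_1(manual[:, j], auto[:, j]) > 0.8]

# 2) univariate Kruskal-Wallis filter on the robust subset
selected = [j for j in robust
            if kruskal(manual[y == 0, j], manual[y == 1, j]).pvalue < 0.05]

# 3) SVM classifier scored by ten-fold cross-validated AUC
model = make_pipeline(StandardScaler(), SVC(kernel="rbf", probability=True))
cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
aucs = cross_val_score(model, manual[:, selected], y, cv=cv, scoring="roc_auc")
print(f"robust: {len(robust)}, selected: {len(selected)}, mean CV AUC: {aucs.mean():.3f}")
```

Because the labels here are random, the cross-validated AUC hovers near chance; on real radiomic features the same structure yields the comparisons reported in the results.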
The clinical model AUC for OS prediction was 0.640 ± 0.013 (mean ± 95% confidence interval) in the training set and 0.610 ± 0.131 in the test set. In the test set, radiomics prediction of OS based on manual segmentation outperformed that based on automated segmentation (AUC 0.662 ± 0.122 vs. 0.471 ± 0.086). Restricting the analysis to robust features improved the manual-segmentation AUC to 0.700 ± 0.102 and the automated-segmentation AUC to 0.554 ± 0.085. The CNN prognosis model demonstrated promising results, with an average AUC of 0.755 ± 0.071 in the training sets and 0.700 ± 0.101 in the test set.
Manual segmentation-derived radiomic features outperformed automated segmentation-derived features for predicting OS in recurrent HGG patients undergoing immunotherapy. The end-to-end CNN prognosis model performed similarly to radiomics modeling with manual-segmentation-derived features, without the need for segmentation. The potential time savings must be weighed against the lower interpretability of end-to-end black-box modeling.