Fu Jie, Singhrao Kamal, Zhong Xinran, Gao Yu, Qi Sharon X, Yang Yingli, Ruan Dan, Lewis John H
Department of Radiation Oncology, University of California, Los Angeles, Los Angeles, California.
Department of Radiation Oncology, Stanford University, Stanford, California.
Adv Radiat Oncol. 2021 Jul 1;6(5):100746. doi: 10.1016/j.adro.2021.100746. eCollection 2021 Sep-Oct.
Most radiomic studies use features extracted from manually drawn tumor contours for classification or survival prediction. However, large interobserver segmentation variability leads to inconsistent features and therefore makes it harder to construct robust prediction models. Here, we proposed an automatic workflow for glioblastoma (GBM) survival prediction based on multimodal magnetic resonance (MR) images.
Two hundred eighty-five patients with glioma (210 GBM, 75 low-grade glioma) were included; 163 of the patients with GBM had overall survival data. Every patient had 4 preoperative MR images and manually drawn tumor contours. A 3-dimensional convolutional neural network, VGG-Seg, was trained and validated using 122 patients with glioma for automatic GBM segmentation. The trained VGG-Seg was then applied to the remaining 163 patients with GBM to generate their autosegmented tumor contours. Handcrafted and deep learning (DL)-based radiomic features were extracted from the autosegmented contours using explicitly designed algorithms and a pretrained convolutional neural network, respectively. The 163 patients with GBM were randomly split into training (n = 122) and testing (n = 41) sets for survival analysis. Cox regression models were trained to construct the handcrafted and DL-based signatures, and the prognostic power of the 2 signatures was evaluated and compared.
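As an illustrative sketch of the signature-construction step (not the authors' implementation), a Cox proportional hazards model can be fit to a table of extracted radiomic features and scored on the held-out set; the lifelines package, file names, column names, and penalizer value below are assumptions made for illustration only.

```python
# Minimal sketch, assuming radiomic features have already been extracted
# from the autosegmented contours. Names and settings are placeholders.
import pandas as pd
from lifelines import CoxPHFitter
from lifelines.utils import concordance_index

# Hypothetical tables: one row per patient, feature columns plus overall
# survival time and an event indicator (1 = death observed).
train_df = pd.read_csv("train_features.csv")   # training set (n = 122 in the paper)
test_df = pd.read_csv("test_features.csv")     # testing set (n = 41 in the paper)

cph = CoxPHFitter(penalizer=0.1)               # small penalty to stabilize the fit
cph.fit(train_df, duration_col="os_months", event_col="event")

# The log partial hazard serves as the prognostic signature (risk score).
test_risk = cph.predict_log_partial_hazard(test_df)

# Concordance index (C-index): discrimination of the signature on the test set.
# Negate the risk score because concordance_index expects scores that increase
# with longer survival.
c_index = concordance_index(test_df["os_months"], -test_risk, test_df["event"])
print(f"Testing C-index: {c_index:.2f}")
```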
The VGG-Seg achieved a mean Dice coefficient of 0.86 for GBM segmentation across the 163 patients with GBM. The handcrafted signature achieved a C-index of 0.64 (95% confidence interval, 0.55-0.73), whereas the DL-based signature achieved a C-index of 0.67 (95% confidence interval, 0.57-0.77). Unlike the handcrafted signature, the DL-based signature successfully stratified the testing patients into 2 prognostically distinct groups.
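For reference, the Dice coefficient used to evaluate the autosegmentation compares each autosegmented contour A against the manual contour B as 2|A ∩ B| / (|A| + |B|). The NumPy sketch below shows this calculation for binary 3-D masks; the mask variable names are assumptions, not the authors' code.

```python
import numpy as np

def dice_coefficient(auto_mask: np.ndarray, manual_mask: np.ndarray) -> float:
    """Dice = 2|A ∩ B| / (|A| + |B|) between binary 3-D tumor masks."""
    a = auto_mask.astype(bool)
    b = manual_mask.astype(bool)
    intersection = np.logical_and(a, b).sum()
    denom = a.sum() + b.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Cohort-level summary (mask arrays assumed to be loaded elsewhere):
# mean_dice = np.mean([dice_coefficient(p, g) for p, g in zip(auto_masks, manual_masks)])
```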
The VGG-Seg generated accurate GBM contours from the 4 MR images. The DL-based signature achieved a numerically higher C-index than the handcrafted signature and significantly stratified patients into prognostically distinct groups. The proposed automatic workflow demonstrated the potential to improve patient stratification and survival prediction in patients with GBM.