Li Hongming, Boimel Pamela, Janopaul-Naylor James, Zhong Haoyu, Xiao Ying, Ben-Josef Edgar, Fan Yong
Department of Radiology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA.
Department of Radiation Oncology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, 19104, USA.
Proc IEEE Int Symp Biomed Imaging. 2019 Apr;2019:846-849. doi: 10.1109/ISBI.2019.8759301. Epub 2019 Jul 11.
Recent radiomic studies have demonstrated the promise of deep learning techniques for learning radiomic features and fusing multimodal imaging data. However, most existing deep learning based radiomic studies build predictive models in a pattern-classification setting, which is not appropriate for survival analysis, where some data samples have incomplete (censored) observations. To improve existing survival analysis techniques, whose performance hinges on the imaging features used, we propose a deep learning method that builds survival regression models by optimizing imaging features with deep convolutional neural networks (CNNs) under a proportional hazards model. To make the CNNs applicable to tumors of varied sizes, a spatial pyramid pooling strategy is adopted. Our method has been validated on a simulated imaging dataset and an FDG-PET/CT dataset of patients treated for locally advanced rectal cancer. Compared with survival prediction models built upon hand-crafted radiomic features using the Cox proportional hazards model and random survival forests, our method achieved competitive prediction performance.
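The two ingredients the abstract names, spatial pyramid pooling over variable-size tumor regions and a proportional hazards (Cox partial likelihood) training objective that handles censored samples, can be sketched as follows. This is a minimal NumPy illustration of the general ideas, not the authors' implementation; the single-channel pooling and the Breslow-style likelihood (ties ignored) are simplifying assumptions.

```python
import numpy as np

def spatial_pyramid_pool(feature_map, levels=(1, 2, 4)):
    """Max-pool a 2-D feature map into a fixed-length vector regardless of
    its spatial size (single-channel sketch; assumes each bin is non-empty,
    i.e. the map is at least as large as the finest pyramid level)."""
    h, w = feature_map.shape
    pooled = []
    for n in levels:
        # split rows/cols into n roughly equal bins; take the max of each bin
        row_edges = np.linspace(0, h, n + 1).astype(int)
        col_edges = np.linspace(0, w, n + 1).astype(int)
        for i in range(n):
            for j in range(n):
                block = feature_map[row_edges[i]:row_edges[i + 1],
                                    col_edges[j]:col_edges[j + 1]]
                pooled.append(block.max())
    # length = sum(n * n for n in levels), e.g. 1 + 4 + 16 = 21
    return np.array(pooled)

def neg_cox_partial_log_likelihood(risk_scores, times, events):
    """Negative Cox partial log-likelihood (Breslow approximation, no ties).

    risk_scores : model outputs (log hazard ratios), one per sample
    times       : observed follow-up times
    events      : 1 if the event was observed, 0 if censored
    Censored samples contribute only through the risk sets of others."""
    order = np.argsort(-times)            # sort by descending time
    risk = risk_scores[order]
    ev = events[order].astype(bool)
    # running log-sum-exp gives log sum_{j: t_j >= t_i} exp(risk_j)
    log_risk_set = np.logaddexp.accumulate(risk)
    return -np.sum((risk - log_risk_set)[ev])
```

Training a CNN for survival regression then amounts to minimizing this loss over the risk scores the network produces from the pooled features, so censored patients are used correctly instead of being forced into class labels.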