Key Laboratory of Image Processing and Intelligent Control of Ministry of Education of China, School of Automation, Huazhong University of Science and Technology, Wuhan 430074, People's Republic of China.
Phys Med Biol. 2018 Dec 21;64(1):015011. doi: 10.1088/1361-6560/aaf44b.
Automatic tumor segmentation from medical images is an important step in computer-aided cancer diagnosis and treatment. Recently, deep learning has been successfully applied to this task, achieving state-of-the-art performance. However, most existing deep learning segmentation methods work on only a single imaging modality. PET/CT scanners are now widely used in the clinic and provide both metabolic and anatomical information by integrating PET and CT in a single device. In this study, we proposed a novel multi-modality segmentation method based on a 3D fully convolutional network (FCN) that takes both PET and CT information into account simultaneously for tumor segmentation. The network starts with a multi-task training module in which two parallel sub-segmentation architectures, constructed from deep convolutional neural networks (CNNs), automatically extract feature maps from PET and CT respectively. A feature fusion module based on cascaded convolutional blocks then re-extracts features from the PET/CT feature maps using a weighted cross-entropy minimization strategy. The tumor mask is produced at the end of the network by a softmax function. The effectiveness of the proposed method was validated on a clinical PET/CT dataset of 84 patients with lung cancer. The results demonstrated that the proposed network was effective, fast, and robust, achieving a significant performance gain over CNN-based and traditional methods using PET or CT only, two V-net based co-segmentation methods, two variational co-segmentation methods based on fuzzy set theory, and a deep learning co-segmentation method using W-net.
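The two-branch co-segmentation idea described in the abstract (parallel per-modality CNN branches, a cascaded-convolution fusion module, a weighted cross-entropy loss, and a softmax output) can be sketched in PyTorch as below. This is a minimal illustrative sketch, not the authors' exact architecture: the layer counts, channel widths, class weights, and the concatenation-based fusion design are all assumptions for demonstration.

```python
# Hypothetical sketch of a two-branch PET/CT co-segmentation FCN.
# Depths, channel widths, and the fusion design are illustrative
# assumptions, not the published architecture.
import torch
import torch.nn as nn


def conv_block(in_ch, out_ch):
    """Two 3D convolutions, each followed by batch norm and ReLU."""
    return nn.Sequential(
        nn.Conv3d(in_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
        nn.Conv3d(out_ch, out_ch, kernel_size=3, padding=1),
        nn.BatchNorm3d(out_ch),
        nn.ReLU(inplace=True),
    )


class TwoBranchCoSeg(nn.Module):
    def __init__(self, n_classes=2):
        super().__init__()
        # Parallel sub-segmentation branches, one per modality.
        self.pet_branch = conv_block(1, 16)
        self.ct_branch = conv_block(1, 16)
        # Fusion module: cascaded conv blocks over concatenated features.
        self.fusion = nn.Sequential(conv_block(32, 32), conv_block(32, 16))
        # 1x1x1 head producing per-class logits (softmax at inference).
        self.head = nn.Conv3d(16, n_classes, kernel_size=1)

    def forward(self, pet, ct):
        fused = torch.cat([self.pet_branch(pet), self.ct_branch(ct)], dim=1)
        return self.head(self.fusion(fused))


model = TwoBranchCoSeg()
# Weighted cross entropy: up-weight the (typically rare) tumor class.
loss_fn = nn.CrossEntropyLoss(weight=torch.tensor([0.2, 0.8]))

pet = torch.randn(1, 1, 16, 32, 32)   # (batch, channel, depth, H, W)
ct = torch.randn(1, 1, 16, 32, 32)
logits = model(pet, ct)               # shape: (1, 2, 16, 32, 32)
mask = logits.softmax(dim=1).argmax(dim=1)  # binary tumor mask
```

Concatenating the branch outputs before the cascaded fusion blocks is one simple way to realize the "re-extract features from PET/CT feature maps" step; the paper itself should be consulted for the actual fusion and loss details.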