Centre for Intelligent Signal and Imaging Research (CISIR), Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Seri Iskandar 32610, Malaysia.
ImViA/ITFIM, University of Burgundy, 21078 Dijon, France.
Sensors (Basel). 2020 Jun 3;20(11):3183. doi: 10.3390/s20113183.
In this paper, we present an evaluation of four encoder-decoder CNNs for segmentation of the prostate gland in T2-weighted (T2W) magnetic resonance imaging (MRI) images. The four selected CNNs are FCN, SegNet, U-Net, and DeepLabV3+, which were originally proposed for the segmentation of road scenes, biomedical images, and natural images. Segmentation of the prostate in T2W MRI images is an important step in the automatic diagnosis of prostate cancer, enabling better lesion detection and staging. Consequently, many research efforts have aimed to improve segmentation of the prostate gland in MRI images. The main challenges of prostate gland segmentation are the blurry prostate boundary and the variability of the prostate's anatomical structure. In this work, we investigated the performance of encoder-decoder CNNs for segmentation of the prostate gland in T2W MRI. Image pre-processing techniques, including image resizing, center-cropping, and intensity normalization, are applied to address inter-patient and inter-scanner variability as well as the dominance of background pixels over prostate pixels. In addition, to enrich the networks with more data, increase data variation, and improve accuracy, patch extraction and data augmentation are applied prior to training. Furthermore, class weight balancing is used to avoid biased networks, since the number of background pixels is much higher than the number of prostate pixels; this class imbalance problem is addressed by using a weighted cross-entropy loss function during training of the CNN models. The performance of the CNNs is evaluated in terms of the Dice similarity coefficient (DSC), and our experimental results show that patch-wise DeepLabV3+ gives the best performance, with a DSC of 92.8%. This is the highest DSC score compared to FCN, SegNet, and U-Net, and it is also competitive with recently published state-of-the-art methods for prostate segmentation.
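For readers unfamiliar with the loss and metric named in the abstract, the following is a minimal NumPy sketch, not taken from the paper, of inverse-frequency class weighting, a pixel-wise weighted cross-entropy loss, and the Dice similarity coefficient. The function names, array shapes, and the particular weighting scheme are illustrative assumptions rather than the authors' exact implementation.

```python
import numpy as np

def class_weights(mask, n_classes=2):
    # Inverse-frequency weights so the sparse prostate class is not
    # drowned out by background pixels (assumed weighting scheme,
    # not necessarily the one used in the paper).
    counts = np.bincount(mask.ravel(), minlength=n_classes).astype(float)
    freq = counts / counts.sum()
    weights = np.zeros_like(freq)
    np.divide(freq.mean(), freq, out=weights, where=freq > 0)
    return weights

def weighted_cross_entropy(probs, mask, weights, eps=1e-7):
    # probs: (H, W, n_classes) softmax outputs; mask: (H, W) integer labels.
    # Each pixel's negative log-likelihood is scaled by its class weight.
    onehot = np.eye(probs.shape[-1])[mask]                   # (H, W, n_classes)
    per_pixel = -(onehot * np.log(probs + eps)).sum(axis=-1)  # (H, W)
    return float((weights[mask] * per_pixel).mean())

def dice_coefficient(pred, gt, eps=1e-7):
    # DSC = 2|P ∩ G| / (|P| + |G|) for binary prediction and ground truth.
    inter = np.logical_and(pred == 1, gt == 1).sum()
    return float(2.0 * inter / (pred.sum() + gt.sum() + eps))

# Toy usage: an 8x8 slice with a small prostate region and a slightly
# over-segmented prediction.
gt = np.zeros((8, 8), dtype=int)
gt[3:5, 3:5] = 1
pred = np.zeros_like(gt)
pred[3:5, 3:6] = 1
w = class_weights(gt)
probs = np.clip(np.stack([1.0 - pred, pred], axis=-1).astype(float), 0.01, 0.99)
print(weighted_cross_entropy(probs, gt, w), dice_coefficient(pred, gt))
```

The class-weight vector makes mistakes on prostate pixels cost more than mistakes on background pixels during training, which is the role class weight balancing plays in the abstract; the DSC is then used purely for evaluation.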