Department of Biomedical Engineering, College of Electronics and Information, Kyung Hee University, Yongin, Republic of Korea.
School of Mechanical Engineering, Sungkyunkwan University, Republic of Korea.
Comput Methods Programs Biomed. 2018 Aug;162:221-231. doi: 10.1016/j.cmpb.2018.05.027. Epub 2018 May 19.
Automatic segmentation of skin lesions in dermoscopy images is still a challenging task due to the large shape variations and indistinct boundaries of the lesions. Accurate segmentation of skin lesions is a key prerequisite step for any computer-aided diagnostic system to recognize skin melanoma.
In this paper, we propose a novel segmentation methodology via full resolution convolutional networks (FrCN). The proposed FrCN method directly learns the full resolution features of each individual pixel of the input data without the need for pre- or post-processing operations such as artifact removal, low-contrast adjustment, or further enhancement of the segmented skin lesion boundaries. We evaluated the proposed method on two publicly available databases, the IEEE International Symposium on Biomedical Imaging (ISBI) 2017 Challenge and PH2 datasets, and compared its segmentation performance against state-of-the-art deep learning segmentation approaches, namely the fully convolutional network (FCN), U-Net, and SegNet.
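To illustrate the full-resolution idea in the abstract (this is a minimal sketch of the general principle, not the paper's FrCN architecture): a convolution applied with zero padding produces an output map with the same spatial dimensions as its input, so a network built from such layers, without pooling or strided downsampling, can emit a per-pixel prediction at the original resolution, with no upsampling or boundary post-processing required.

```python
def conv2d_same(image, kernel):
    """Convolve a 2D grid with an odd-sized square kernel, zero-padded
    so that the output has the same height and width as the input.
    Illustrative only; real networks use optimized library kernels."""
    h, w = len(image), len(image[0])
    r = len(kernel) // 2  # kernel radius (e.g. 1 for a 3x3 kernel)
    out = [[0.0] * w for _ in range(h)]
    for i in range(h):
        for j in range(w):
            s = 0.0
            for di in range(-r, r + 1):
                for dj in range(-r, r + 1):
                    ii, jj = i + di, j + dj
                    if 0 <= ii < h and 0 <= jj < w:  # zero padding
                        s += image[ii][jj] * kernel[di + r][dj + r]
            out[i][j] = s
    return out

# A 4x6 "image" passed through a 3x3 smoothing kernel keeps its 4x6 shape,
# so every input pixel has a corresponding output prediction.
image = [[float((i + j) % 2) for j in range(6)] for i in range(4)]
smooth = [[1 / 9.0] * 3 for _ in range(3)]
result = conv2d_same(image, smooth)
```

Because the spatial size is preserved at every layer, each pixel of the final map can be classified as lesion or background directly, which is the property the abstract attributes to FrCN.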
Our results showed that the proposed FrCN method segmented the skin lesions with an average Jaccard index of 77.11% and an overall segmentation accuracy of 94.03% for the ISBI 2017 test dataset and 84.79% and 95.08%, respectively, for the PH2 dataset. In comparison to FCN, U-Net, and SegNet, the proposed FrCN outperformed them by 4.94%, 15.47%, and 7.48% for the Jaccard index and 1.31%, 3.89%, and 2.27% for the segmentation accuracy, respectively. Furthermore, the proposed FrCN achieved a segmentation accuracy of 95.62% for some representative clinical benign cases, 90.78% for the melanoma cases, and 91.29% for the seborrheic keratosis cases in the ISBI 2017 test dataset, exhibiting better performance than those of FCN, U-Net, and SegNet.
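The two figures of merit reported above are standard for segmentation evaluation: the Jaccard index (intersection over union of the predicted and ground-truth lesion masks) and overall pixel accuracy (fraction of pixels labeled identically). A minimal sketch of both, on toy binary masks rather than real dermoscopy data:

```python
def jaccard_index(pred, gt):
    """Jaccard index (intersection over union) of two binary masks."""
    inter = union = 0
    for p_row, g_row in zip(pred, gt):
        for p, g in zip(p_row, g_row):
            if p and g:
                inter += 1
            if p or g:
                union += 1
    return inter / union if union else 1.0  # two empty masks agree fully

def pixel_accuracy(pred, gt):
    """Fraction of pixels on which the two masks agree."""
    agree = total = 0
    for p_row, g_row in zip(pred, gt):
        for p, g in zip(p_row, g_row):
            agree += (p == g)
            total += 1
    return agree / total

# Toy 4x4 masks: the prediction covers the ground-truth lesion plus one
# extra pixel, so intersection = 3 pixels and union = 4 pixels.
pred = [[0, 1, 1, 0],
        [0, 1, 1, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]
gt   = [[0, 1, 1, 0],
        [0, 1, 0, 0],
        [0, 0, 0, 0],
        [0, 0, 0, 0]]

print(jaccard_index(pred, gt))   # 0.75
print(pixel_accuracy(pred, gt))  # 0.9375 (15 of 16 pixels agree)
```

Note that accuracy is inflated by the large background region (one wrong pixel out of sixteen), whereas the Jaccard index scores only the lesion overlap; this is why the paper reports both.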
We conclude that using the full spatial resolution of the input image enables the network to learn more specific and prominent features, leading to improved segmentation performance.