Khan Muhammad Attique, Sharif Muhammad, Akram Tallha, Damaševičius Robertas, Maskeliūnas Rytis
Department of Computer Science, Wah Campus, COMSATS University Islamabad, Wah Cantonment 47040, Pakistan.
Department of Electrical Engineering, Wah Campus, COMSATS University Islamabad, Islamabad 45550, Pakistan.
Diagnostics (Basel). 2021 Apr 29;11(5):811. doi: 10.3390/diagnostics11050811.
Manual diagnosis of skin cancer is time-consuming and expensive; therefore, it is essential to develop automated diagnostic methods that can classify multiclass skin lesions with greater accuracy. We propose a fully automated approach for multiclass skin lesion segmentation and classification using the most discriminant deep features. First, the input images are enhanced using local color-controlled histogram intensity values (LCcHIV). Next, saliency is estimated using a novel Deep Saliency Segmentation method, which uses a custom ten-layer convolutional neural network (CNN). The generated heat map is converted into a binary image using a thresholding function. The segmented color lesion images are then used for feature extraction by a deep pre-trained CNN model. To avoid the curse of dimensionality, we implement an improved moth flame optimization (IMFO) algorithm to select the most discriminant features. The resultant features are fused using multiset maximum correlation analysis (MMCA) and classified with the Kernel Extreme Learning Machine (KELM) classifier. The segmentation performance of the proposed methodology is analyzed on the ISBI 2016, ISBI 2017, ISIC 2018, and PH2 datasets, achieving accuracies of 95.38%, 95.79%, 92.69%, and 98.70%, respectively. The classification performance is evaluated on the HAM10000 dataset, achieving an accuracy of 90.67%. To prove the effectiveness of the proposed methods, we present a comparison with state-of-the-art techniques.
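Two steps of the pipeline summarized above lend themselves to a compact sketch: converting the saliency heat map into a binary lesion mask via thresholding, and the closed-form KELM classifier applied to the fused features. The sketch below (NumPy) is illustrative only: the paper's actual thresholding function, kernel choice, and hyperparameters are not given in the abstract, so the 0.5 cutoff, the RBF kernel, and the `C`/`gamma` values are assumptions, and the standard KELM ridge solution β = (I/C + K)⁻¹T stands in for whatever variant the authors used.

```python
import numpy as np

def heatmap_to_mask(heatmap, thresh=0.5):
    """Binarize a saliency heat map into a lesion mask.

    The paper's thresholding function is not specified in the abstract;
    here we min-max normalize and apply an assumed fixed cutoff of 0.5.
    """
    h = (heatmap - heatmap.min()) / (np.ptp(heatmap) + 1e-12)
    return (h >= thresh).astype(np.uint8)

def rbf_kernel(A, B, gamma=1.0):
    """Gaussian (RBF) kernel matrix between row vectors of A and B."""
    sq_dists = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-gamma * sq_dists)

class KELM:
    """Kernel Extreme Learning Machine (standard formulation).

    Training is a single linear solve: beta = (I/C + K)^-1 T, where K is
    the kernel matrix over training samples and T is the one-hot target
    matrix. Prediction projects test kernels onto beta and takes argmax.
    """
    def __init__(self, C=1.0, gamma=1.0):
        self.C, self.gamma = C, gamma

    def fit(self, X, y):
        self.X = X
        n_classes = int(y.max()) + 1
        T = np.eye(n_classes)[y]                     # one-hot targets
        K = rbf_kernel(X, X, self.gamma)
        self.beta = np.linalg.solve(np.eye(len(X)) / self.C + K, T)
        return self

    def predict(self, X_test):
        return rbf_kernel(X_test, self.X, self.gamma).dot(self.beta).argmax(1)
```

In practice, `heatmap_to_mask` would be applied to the CNN's saliency output before feature extraction, and `KELM.fit` would receive the MMCA-fused feature vectors; the feature-selection (IMFO) and fusion stages are omitted here for brevity.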