Mahum Rabbia, Aladhadh Suliman
Department of Computer Science, University of Engineering and Technology, Taxila, Taxila 47040, Pakistan.
Department of Information Technology, College of Computer, Qassim University, Buraydah 51452, Saudi Arabia.
Diagnostics (Basel). 2022 Nov 28;12(12):2974. doi: 10.3390/diagnostics12122974.
The abnormal growth of cells in the skin causes two types of tumors: benign and malignant. Oncologists use various methods, such as imaging and biopsies, to assess the presence of skin cancer, but these are time-consuming and require extra human effort. Researchers have developed automated methods based on hand-crafted feature extraction from skin images; however, these methods may fail to detect skin cancer at an early stage when tested on unseen data. Therefore, in this study, a novel and robust skin cancer detection model based on feature fusion was proposed. First, the proposed model pre-processed the images with a Gaussian filter (GF) to remove noise. Second, hand-crafted features were extracted using local binary patterns (LBP), while Inception V3 was employed for automatic feature extraction. In addition, the Adam optimizer was utilized to adjust the learning rate. Finally, a long short-term memory (LSTM) network was applied to the fused features to classify skin cancer as malignant or benign. The proposed system thus combines the benefits of both ML- and DL-based algorithms. We utilized the DermIS skin lesion dataset, available on the Kaggle website, which consists of 1000 images: 500 in the benign class and 500 in the malignant class. The proposed methodology attained 99.4% accuracy, 98.7% precision, 98.66% recall, and a 98% F-score. We compared the performance of our feature-fusion-based method with existing segmentation-based and DL-based techniques. Additionally, we cross-validated the performance of the proposed model on 1000 images from the International Skin Imaging Collaboration (ISIC), attaining 98.4% detection accuracy. The results show that our method yields significant improvements over existing techniques and outperforms them.
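As a concrete illustration of the pipeline summarized above, the following Python sketch shows one plausible implementation of the feature-extraction stage: Gaussian filtering for denoising, hand-crafted LBP histograms, and deep features from a pretrained Inception V3 backbone, concatenated into a fused vector. The function names, LBP parameters (P = 8, R = 1), sigma value, and the 299x299 input size are illustrative assumptions, not the authors' exact configuration.

# Minimal sketch of the feature-fusion front end (assumed parameters).
import numpy as np
import tensorflow as tf
from skimage.color import rgb2gray
from skimage.filters import gaussian
from skimage.feature import local_binary_pattern
from skimage.transform import resize

# Pretrained Inception V3 used purely as a feature extractor; global average
# pooling yields a 2048-dimensional vector per image.
cnn = tf.keras.applications.InceptionV3(
    weights="imagenet", include_top=False, pooling="avg",
    input_shape=(299, 299, 3))

def lbp_histogram(rgb_image, points=8, radius=1):
    """Hand-crafted descriptor: histogram of uniform LBP codes."""
    gray = (rgb2gray(rgb_image) * 255).astype(np.uint8)
    codes = local_binary_pattern(gray, points, radius, method="uniform")
    n_bins = points + 2  # uniform patterns plus one catch-all bin
    hist, _ = np.histogram(codes, bins=n_bins, range=(0, n_bins), density=True)
    return hist.astype(np.float32)

def extract_fused_features(rgb_image):
    """Denoise with a Gaussian filter, then concatenate LBP and CNN features."""
    denoised = gaussian(rgb_image, sigma=1, channel_axis=-1)   # float image in [0, 1]
    resized = resize(denoised, (299, 299), anti_aliasing=True)
    x = tf.keras.applications.inception_v3.preprocess_input(resized * 255.0)
    deep = cnn.predict(x[np.newaxis], verbose=0)[0]            # shape (2048,)
    return np.concatenate([lbp_histogram(denoised), deep])     # shape (2058,)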
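The classification stage can be sketched in a similar hedged way: the fused feature vector is reshaped into a short sequence and passed to an LSTM trained with the Adam optimizer for binary (benign vs. malignant) prediction. The sequence length, layer sizes, and learning rate below are assumptions chosen for illustration; the abstract does not specify them.

# Minimal sketch of the LSTM classifier over fused features (assumed hyperparameters).
import tensorflow as tf

FEATURE_DIM = 2058        # 10 LBP bins + 2048 Inception V3 features (from the sketch above)
TIME_STEPS = 6            # assumed split of the fused vector into a sequence
STEP_DIM = FEATURE_DIM // TIME_STEPS  # 343 features per time step

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(FEATURE_DIM,)),
    tf.keras.layers.Reshape((TIME_STEPS, STEP_DIM)),  # treat feature chunks as time steps
    tf.keras.layers.LSTM(128),
    tf.keras.layers.Dense(64, activation="relu"),
    tf.keras.layers.Dense(1, activation="sigmoid"),   # benign (0) vs. malignant (1)
])

model.compile(
    optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4),  # Adam handles per-parameter learning-rate adaptation
    loss="binary_crossentropy",
    metrics=["accuracy", tf.keras.metrics.Precision(), tf.keras.metrics.Recall()],
)

# Training on fused features X of shape (N, 2058) and labels y of shape (N,), for example:
# model.fit(X_train, y_train, validation_split=0.2, epochs=50, batch_size=32)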