Annu Int Conf IEEE Eng Med Biol Soc. 2022 Jul;2022:5051-5054. doi: 10.1109/EMBC48229.2022.9871370.
Automated skin cancer diagnosis is challenging due to inter-class uniformity, intra-class variation, and the complex structure of dermoscopy images. Convolutional neural networks (CNNs) have recently made considerable progress in melanoma classification, even with limited skin image data. One drawback of these methods is the loss of image detail caused by downsampling high-resolution skin images to a low resolution. Further, most approaches extract features only from the whole skin image. This paper proposes an ensemble feature fusion and sparse autoencoder (SAE) based framework to overcome the above issues and improve melanoma classification performance. The proposed method extracts features from two streams, local and global, using a pre-trained CNN model. The local stream extracts features from image patches, while the global stream derives features from the whole skin image, preserving both local and global representations. The features are then fused, and an SAE framework is subsequently designed to further enrich the feature representation. The proposed method is validated on the ISIC 2016 dataset, and the experimental results indicate the superiority of the proposed approach.
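The two-stream pipeline the abstract describes (global features from the whole image, local features from patches, fusion by concatenation, then a sparse autoencoder on the fused descriptor) can be sketched as follows. This is a minimal illustration, not the authors' implementation: `cnn_features` is a hypothetical stand-in for a pre-trained CNN backbone, the patch size and layer widths are arbitrary, and the SAE is shown only as an encoder, with its sparsity-penalized training loop omitted.

```python
import numpy as np

rng = np.random.default_rng(0)

def cnn_features(img):
    # Hypothetical stand-in for a pre-trained CNN feature extractor;
    # here we simply use per-channel means as a fixed-length descriptor.
    return img.mean(axis=(0, 1))

def extract_patches(img, patch=56):
    # Tile the image into non-overlapping patches for the local stream.
    h, w, _ = img.shape
    return [img[i:i + patch, j:j + patch]
            for i in range(0, h, patch)
            for j in range(0, w, patch)]

def fused_features(img):
    # Global stream: features from the whole skin image.
    g = cnn_features(img)
    # Local stream: average the per-patch features.
    loc = np.mean([cnn_features(p) for p in extract_patches(img)], axis=0)
    # Fusion by concatenation preserves both representations.
    return np.concatenate([g, loc])

def sae_encode(x, W, b):
    # Sigmoid encoder of a sparse autoencoder. In training, sparsity
    # would be imposed via a KL-divergence penalty on the mean hidden
    # activation; that loop is omitted in this sketch.
    return 1.0 / (1.0 + np.exp(-(W @ x + b)))

img = rng.random((224, 224, 3))          # synthetic dermoscopy image
x = fused_features(img)                  # fused local+global descriptor
W = rng.standard_normal((4, x.size))     # arbitrary hidden width of 4
b = np.zeros(4)
z = sae_encode(x, W, b)                  # enriched feature representation
```

A real system would replace `cnn_features` with activations from a pre-trained backbone and feed `z` to a classifier trained to separate melanoma from benign lesions.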