Çevik Nazife, Çevik Taner, Osman Onur, Alsubai Shtwai, Rasheed Jawad
Department of Computer Engineering, Istanbul Arel University, Istanbul, Türkiye.
Department of Computer Engineering, Istanbul Rumeli University, Istanbul, Türkiye.
Front Med (Lausanne). 2025 Jul 16;12:1589587. doi: 10.3389/fmed.2025.1589587. eCollection 2025.
Accurate medical image segmentation significantly impacts patient outcomes, especially in diseases such as skin cancer, intestinal polyps, and brain tumors. While deep learning methods have shown promise, their performance often varies across datasets and modalities. Combining advanced segmentation techniques with traditional feature extraction approaches may enhance robustness and generalizability.
This study aims to develop an integrated framework combining segmentation, advanced feature extraction, and transfer learning to enhance segmentation accuracy across diverse medical imaging (MI) datasets, thus improving classification accuracy and generalization capabilities.
We employed independently trained U-Net models to segment skin cancer, polyp, and brain tumor regions from three separate MI datasets (HAM10000, Kvasir-SEG, and the Figshare Brain Tumor dataset). In addition, we applied classical texture-based feature extraction methods, Local Binary Patterns (LBP) and the Gray-Level Co-occurrence Matrix (GLCM), processing each red, green, and blue (RGB) channel separately (with a GLCM offset of [0 1]) and recombining the channel-wise features into comprehensive texture descriptors. The segmented images and extracted features were then used to fine-tune pre-trained transfer learning models. We also assessed performance on an integrated dataset comprising all three modalities. Classification was performed with Support Vector Machines (SVM), and results were evaluated in terms of accuracy, recall (sensitivity), specificity, and the F-measure, alongside a bias-variance analysis of generalization capability.
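To make the texture-extraction step concrete, the following is a minimal sketch of per-channel GLCM and LBP descriptors in Python with scikit-image, assuming 8-bit RGB input. The GLCM offset of [0 1] corresponds to a horizontal neighbor (distance 1, angle 0); the remaining parameter choices (uniform LBP with P=8, R=1; four GLCM properties; 256 gray levels) are illustrative assumptions, not settings reported in the paper.

```python
import numpy as np
from skimage.feature import graycomatrix, graycoprops, local_binary_pattern

def texture_descriptor(rgb_image: np.ndarray) -> np.ndarray:
    """Extract GLCM and LBP features per RGB channel and concatenate them."""
    features = []
    for c in range(3):                       # process R, G, B separately
        channel = rgb_image[..., c]          # 2D uint8 channel
        # GLCM with offset [0 1]: distance 1, angle 0 (horizontal neighbor)
        glcm = graycomatrix(channel, distances=[1], angles=[0],
                            levels=256, symmetric=True, normed=True)
        for prop in ("contrast", "correlation", "energy", "homogeneity"):
            features.append(graycoprops(glcm, prop)[0, 0])
        # Uniform LBP (P=8 neighbors, radius 1) summarized as a histogram;
        # uniform patterns take values 0..9, hence 10 bins
        lbp = local_binary_pattern(channel, P=8, R=1, method="uniform")
        hist, _ = np.histogram(lbp, bins=np.arange(11), density=True)
        features.extend(hist)
    return np.asarray(features)              # recombined texture descriptor
```

Each channel contributes 4 GLCM properties and a 10-bin LBP histogram, so the recombined descriptor has 42 dimensions under these assumptions.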
U-Net segmentation achieved high accuracy across datasets, with particularly strong results for polyps (98.00%) and brain tumors (99.66%). LBP consistently delivered superior performance, especially on the skin cancer and polyp datasets, reaching up to 98.80% accuracy. Transfer learning improved segmentation accuracy and generalizability, most evidently on the skin cancer (85.39%) and brain tumor (99.13%) datasets. When the three datasets were combined, the proposed methods retained high generalization capability, with the U-Net model achieving 95.20% accuracy. After segmenting lesion regions with U-Net, LBP features were extracted and classified with an SVM, achieving 99.22% classification accuracy on the combined dataset (skin, polyp, and brain).
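As a usage illustration of the final stage (texture descriptors from segmented regions classified with an SVM), here is a minimal scikit-learn sketch. The RBF kernel, C value, train/test split, and the synthetic placeholder data are assumptions made so the example runs standalone; they are not the study's reported configuration.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC
from sklearn.metrics import classification_report

# Placeholder data so the sketch runs standalone: in practice X stacks the
# 42-dimensional texture descriptors computed from U-Net-segmented images
# and y holds the lesion class (skin cancer / polyp / brain tumor).
rng = np.random.default_rng(0)
X = rng.normal(size=(300, 42))
y = rng.integers(0, 3, size=300)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.2, stratify=y, random_state=0)

# Standardize features, then fit an RBF-kernel SVM (illustrative defaults).
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
clf.fit(X_train, y_train)

# Reports precision, recall (sensitivity), and F-measure per class.
print(classification_report(y_test, clf.predict(X_test)))
```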
Integrating deep learning-based segmentation (U-Net), classical feature extraction techniques (GLCM and LBP), and transfer learning significantly enhanced accuracy and generalization across multiple MI datasets. The methodology provides a robust, versatile framework applicable to a variety of MI tasks, supporting advances in diagnostic precision and clinical decision-making.