Suppr 超能文献



Advancing patient care with AI: a unified framework for medical image segmentation using transfer learning and hybrid feature extraction.

Authors

Çevik Nazife, Çevik Taner, Osman Onur, Alsubai Shtwai, Rasheed Jawad

Affiliations

Department of Computer Engineering, Istanbul Arel University, Istanbul, Türkiye.

Department of Computer Engineering, Istanbul Rumeli University, Istanbul, Türkiye.

Publication

Front Med (Lausanne). 2025 Jul 16;12:1589587. doi: 10.3389/fmed.2025.1589587. eCollection 2025.

DOI: 10.3389/fmed.2025.1589587
PMID: 40740955
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12307313/
Abstract

BACKGROUND

Accurate medical image segmentation significantly impacts patient outcomes, especially in diseases such as skin cancer, intestinal polyps, and brain tumors. While deep learning methods have shown promise, their performance often varies across datasets and modalities. Combining advanced segmentation techniques with traditional feature extraction approaches may enhance robustness and generalizability.

OBJECTIVE

This study aims to develop an integrated framework combining segmentation, advanced feature extraction, and transfer learning to enhance segmentation accuracy across diverse medical imaging (MI) datasets, thus improving classification accuracy and generalization capabilities.

METHODS

We employed independently trained U-Net models to segment skin cancer, polyp, and brain tumor regions from three separate MI datasets (HAM10000, Kvasir-SEG, and the Figshare Brain Tumor dataset). In addition, we applied classical texture-based feature extraction methods, namely Local Binary Patterns (LBP) and the Gray-Level Co-occurrence Matrix (GLCM), processing each Red-Green-Blue (RGB) channel separately using an offset of [0 1] and recombining the channels to create comprehensive texture descriptors. The segmented images and extracted features were then used to fine-tune pre-trained transfer learning models. We also assessed combined performance on an integrated dataset comprising all three modalities. Classification was performed using Support Vector Machines (SVM), and results were evaluated on accuracy, recall (sensitivity), specificity, and the F-measure, alongside a bias-variance analysis of model generalization.
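The texture-descriptor step can be illustrated with a small NumPy sketch. The abstract only specifies per-channel processing and a GLCM offset of [0 1] (i.e., the right-hand neighbour); the 8-neighbour LBP variant, the 256-bin per-channel histograms, and the 8-level GLCM quantization below are illustrative assumptions, not the authors' exact implementation.

```python
import numpy as np

def lbp_channel(img):
    """Basic 8-neighbour LBP codes for one 2-D uint8 channel (no interpolation)."""
    c = img[1:-1, 1:-1]                       # interior pixels (centres)
    codes = np.zeros(c.shape, dtype=np.uint8)
    # 8 neighbours, clockwise from top-left; each contributes one bit
    shifts = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
              (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(shifts):
        n = img[1 + dy:img.shape[0] - 1 + dy, 1 + dx:img.shape[1] - 1 + dx]
        codes |= (n >= c).astype(np.uint8) << bit
    return codes

def lbp_descriptor(rgb):
    """Concatenate normalized 256-bin LBP histograms of the R, G, B channels."""
    hists = []
    for ch in range(3):
        codes = lbp_channel(rgb[..., ch])
        h, _ = np.histogram(codes, bins=256, range=(0, 256))
        hists.append(h / codes.size)
    return np.concatenate(hists)              # length 3 * 256 = 768

def glcm_offset01(ch, levels=8):
    """Normalized gray-level co-occurrence matrix for offset [0 1]
    (each pixel paired with its right-hand neighbour)."""
    q = ch.astype(np.int64) * levels // 256   # quantize to `levels` gray levels
    m = np.zeros((levels, levels))
    np.add.at(m, (q[:, :-1].ravel(), q[:, 1:].ravel()), 1)
    return m / m.sum()

# Combine LBP and GLCM features into one descriptor for a random test image
rgb = np.random.default_rng(0).integers(0, 256, (64, 64, 3), dtype=np.uint8)
features = np.concatenate([lbp_descriptor(rgb), glcm_offset01(rgb[..., 0]).ravel()])
```

In the study's pipeline, descriptors like `features` would be computed on the U-Net-segmented lesion regions and fed to the SVM classifier.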

RESULTS

U-Net segmentation achieved high accuracy across datasets, with particularly notable results for polyps (98.00%) and brain tumors (99.66%). LBP consistently showed superior performance, especially in skin cancer and polyp datasets, achieving up to 98.80% accuracy. Transfer learning improved segmentation accuracy and generalizability, particularly evident in skin cancer (85.39%) and brain tumor (99.13%) datasets. When datasets were combined, the proposed methods achieved high generalization capability, with the U-Net model achieving 95.20% accuracy. After segmenting the lesion regions using U-Net, LBP features were extracted and classified using an SVM model, achieving 99.22% classification accuracy on the combined dataset (skin, polyp, and brain).
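The evaluation metrics behind the figures above (accuracy, recall/sensitivity, specificity, F-measure) follow directly from a binary confusion matrix. A minimal sketch with hypothetical labels (1 = lesion class, 0 = background):

```python
import numpy as np

def binary_metrics(y_true, y_pred):
    """Accuracy, recall (sensitivity), specificity and F-measure
    from binary ground-truth and predicted labels."""
    y_true = np.asarray(y_true)
    y_pred = np.asarray(y_pred)
    tp = np.sum((y_true == 1) & (y_pred == 1))   # true positives
    tn = np.sum((y_true == 0) & (y_pred == 0))   # true negatives
    fp = np.sum((y_true == 0) & (y_pred == 1))   # false positives
    fn = np.sum((y_true == 1) & (y_pred == 0))   # false negatives
    accuracy = (tp + tn) / len(y_true)
    recall = tp / (tp + fn)                      # a.k.a. sensitivity
    specificity = tn / (tn + fp)
    precision = tp / (tp + fp)
    f_measure = 2 * precision * recall / (precision + recall)
    return accuracy, recall, specificity, f_measure
```

The same four quantities can be computed per dataset or on the combined (skin, polyp, brain) set, as reported in the results.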

CONCLUSION

Integrating deep learning-based segmentation (U-Net), classical feature extraction techniques (GLCM and LBP), and transfer learning significantly enhanced accuracy and generalization across multiple MI datasets. The methodology provides a robust, versatile framework applicable to various MI tasks, supporting advances in diagnostic precision and clinical decision-making.


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d06b/12307313/daae7d5706dd/fmed-12-1589587-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d06b/12307313/cab509c5c3ed/fmed-12-1589587-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d06b/12307313/53dcf4a65ae0/fmed-12-1589587-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d06b/12307313/90b5aaa1f1de/fmed-12-1589587-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d06b/12307313/bbb07945961a/fmed-12-1589587-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d06b/12307313/5222eb588aed/fmed-12-1589587-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d06b/12307313/c3099fb8d0fc/fmed-12-1589587-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d06b/12307313/15bb3194f7c0/fmed-12-1589587-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d06b/12307313/de1fb57885d8/fmed-12-1589587-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d06b/12307313/a3b0fb82934a/fmed-12-1589587-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d06b/12307313/38840ba9247b/fmed-12-1589587-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d06b/12307313/d5eb7ba89721/fmed-12-1589587-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d06b/12307313/93c0d8052ed2/fmed-12-1589587-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d06b/12307313/11a714967685/fmed-12-1589587-g014.jpg

Similar Articles

1. Advancing patient care with AI: a unified framework for medical image segmentation using transfer learning and hybrid feature extraction. Front Med (Lausanne). 2025 Jul 16;12:1589587. doi: 10.3389/fmed.2025.1589587. eCollection 2025.
2. A medical image classification method based on self-regularized adversarial learning. Med Phys. 2024 Nov;51(11):8232-8246. doi: 10.1002/mp.17320. Epub 2024 Jul 30.
3. Leveraging a foundation model zoo for cell similarity search in oncological microscopy across devices. Front Oncol. 2025 Jun 18;15:1480384. doi: 10.3389/fonc.2025.1480384. eCollection 2025.
4. A deep learning approach to direct immunofluorescence pattern recognition in autoimmune bullous diseases. Br J Dermatol. 2024 Jul 16;191(2):261-266. doi: 10.1093/bjd/ljae142.
5. .. Int Ophthalmol. 2025 Jun 27;45(1):266. doi: 10.1007/s10792-025-03602-6.
6. Brain tumor segmentation with deep learning: Current approaches and future perspectives. J Neurosci Methods. 2025 Jun;418:110424. doi: 10.1016/j.jneumeth.2025.110424. Epub 2025 Mar 21.
7. The impact of uncertainty estimation on radiomic segmentation reproducibility and scan-rescan repeatability in kidney MRI. Med Phys. 2025 Jul;52(7):e17995. doi: 10.1002/mp.17995.
8. Artificial intelligence for diagnosing exudative age-related macular degeneration. Cochrane Database Syst Rev. 2024 Oct 17;10(10):CD015522. doi: 10.1002/14651858.CD015522.pub2.
9. Thymoma habitat segmentation and risk prediction model using CT imaging and K-means clustering. Med Phys. 2025 Jul;52(7):e17892. doi: 10.1002/mp.17892. Epub 2025 May 19.
10. Stabilizing machine learning for reproducible and explainable results: A novel validation approach to subject-specific insights. Comput Methods Programs Biomed. 2025 Jun 21;269:108899. doi: 10.1016/j.cmpb.2025.108899.
