
A brain tumor segmentation enhancement in MRI images using U-Net and transfer learning.

Authors

Pourmahboubi Amin, Arsalani Saeed Nazanin, Tabrizchi Hamed

Affiliations

Department of Computer Science, Faculty of Mathematics, Statistics, and Computer Science, University of Tabriz, Tabriz, East Azerbaijan, Iran.

Department of Biology, Faculty of Natural Science, University of Tabriz, Tabriz, East Azerbaijan, Iran.

Publication

BMC Med Imaging. 2025 Jul 31;25(1):307. doi: 10.1186/s12880-025-01837-4.

Abstract

This paper presents a novel transfer learning approach for segmenting brain tumors in Magnetic Resonance Imaging (MRI) images. Using Fluid-Attenuated Inversion Recovery (FLAIR) abnormality segmentation masks and MRI scans from The Cancer Genome Atlas's (TCGA's) lower-grade glioma collection, the proposed approach uses a VGG19-based U-Net architecture with fixed pretrained weights. The experimental findings demonstrate the effectiveness of the proposed framework: an Area Under the Curve (AUC) of 0.9957, F1-score of 0.9679, Dice coefficient of 0.9679, precision of 0.9541, recall of 0.9821, and Intersection-over-Union (IoU) of 0.9378. On these metrics, the VGG19-powered U-Net outperforms not only the conventional U-Net model but also the compared variants that used other pre-trained backbones in the U-Net encoder.

Clinical trial registration

Not applicable, as this study used an existing, publicly available dataset and did not involve a clinical trial.
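The abstract reports overlap-based segmentation metrics (Dice coefficient and IoU) computed between predicted and ground-truth binary masks. As an illustrative sketch only (not the authors' code; the smoothing constant `eps` is an assumption commonly used to avoid division by zero on empty masks):

```python
import numpy as np

def dice_coefficient(y_true, y_pred, eps=1e-7):
    """Dice = 2|A ∩ B| / (|A| + |B|) for binary masks A, B."""
    intersection = np.sum(y_true * y_pred)
    return (2.0 * intersection + eps) / (np.sum(y_true) + np.sum(y_pred) + eps)

def iou(y_true, y_pred, eps=1e-7):
    """Intersection-over-Union (Jaccard index) = |A ∩ B| / |A ∪ B|."""
    intersection = np.sum(y_true * y_pred)
    union = np.sum(y_true) + np.sum(y_pred) - intersection
    return (intersection + eps) / (union + eps)

# Toy example: a 4x4 ground-truth mask vs. a prediction missing one pixel
y_true = np.array([[0, 1, 1, 0],
                   [0, 1, 1, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
y_pred = np.array([[0, 1, 1, 0],
                   [0, 1, 0, 0],
                   [0, 0, 0, 0],
                   [0, 0, 0, 0]])
print(round(dice_coefficient(y_true, y_pred), 4))  # 2*3/(4+3) ≈ 0.8571
print(round(iou(y_true, y_pred), 4))               # 3/(4+3-3) = 0.75
```

Note that Dice and F1-score are mathematically identical for binary masks, which is why the abstract reports the same value (0.9679) for both.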

