

Evaluating the relationship between magnetic resonance image quality metrics and deep learning-based segmentation accuracy of brain tumors.

Affiliations

Department of Imaging Physics, The University of Texas MD Anderson Cancer Center, Houston, Texas, USA.

Department of Computational and Applied Mathematics, Rice University, Houston, Texas, USA.

Publication Information

Med Phys. 2024 Jul;51(7):4898-4906. doi: 10.1002/mp.17059. Epub 2024 Apr 19.

Abstract

BACKGROUND

Magnetic resonance imaging (MRI) scans are known to suffer from a variety of acquisition artifacts as well as equipment-based variations that impact image appearance and segmentation performance. It is still unclear whether a direct relationship exists between magnetic resonance (MR) image quality metrics (IQMs) (e.g., signal-to-noise, contrast-to-noise) and segmentation accuracy.
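To make the IQMs named here concrete, the following is a minimal sketch of signal-to-noise and contrast-to-noise under one common convention (exact definitions vary between tools; the image, masks, and values are synthetic and purely illustrative):

```python
import numpy as np

def snr_cnr(image, fg_mask, bg_mask):
    """Illustrative SNR/CNR under one common convention:
    SNR = mean(foreground) / std(background),
    CNR = |mean(fg) - mean(bg)| / std(background)."""
    fg, bg = image[fg_mask], image[bg_mask]
    eps = 1e-8  # guard against a zero-variance background
    return fg.mean() / (bg.std() + eps), abs(fg.mean() - bg.mean()) / (bg.std() + eps)

# Synthetic example: a bright square ("signal") on a noisy background.
rng = np.random.default_rng(0)
img = rng.normal(10.0, 1.0, size=(64, 64))
img[16:48, 16:48] += 100.0
fg = np.zeros_like(img, dtype=bool)
fg[16:48, 16:48] = True
snr, cnr = snr_cnr(img, fg, ~fg)
```

Adding noise to the background lowers both metrics, which is the intuition behind treating them as per-scan quality scores.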

PURPOSE

Deep learning (DL) approaches have shown significant promise for automated segmentation of brain tumors on MRI but depend on the quality of input training images. We sought to evaluate the relationship between IQMs of input training images and DL-based brain tumor segmentation accuracy toward developing more generalizable models for multi-institutional data.

METHODS

We trained a 3D DenseNet model on the BraTS 2020 cohorts to segment the tumor subregions on MRI: enhancing tumor (ET), peritumoral edema, and the necrotic and non-enhancing tumor core. Performance was quantified via a 5-fold cross-validated Dice coefficient. MRI scans were evaluated with the open-source quality control tool MRQy to yield 13 IQMs per scan. The Pearson correlation coefficient was computed between whole tumor (WT) Dice values and IQM measures in the training cohorts to identify the quality measures most correlated with segmentation performance. Each selected IQM was used to group MRI scans as "better" quality (BQ) or "worse" quality (WQ) via relative thresholding. Segmentation performance of the DenseNet model was then re-evaluated when (i) training on BQ MRI images and validating on WQ images, and (ii) training on WQ images and validating on BQ images. Trends were further validated on independent test sets derived from the BraTS 2021 training cohorts.
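The core of the analysis pipeline (per-scan Dice scores, Pearson correlation against an IQM, and relative thresholding into BQ/WQ groups) can be sketched as follows; the Dice scores and IQM values below are invented for illustration and are not the paper's data:

```python
import numpy as np

def dice(pred, truth):
    """Dice overlap between two binary masks."""
    inter = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * inter / denom if denom else 1.0

# Hypothetical per-scan whole-tumor Dice scores paired with one IQM
# (e.g., an inhomogeneity measure, where lower is better).
wt_dice = np.array([0.91, 0.85, 0.78, 0.88, 0.70, 0.95])
iqm = np.array([0.10, 0.15, 0.30, 0.12, 0.40, 0.05])

# Pearson correlation between segmentation accuracy and the IQM.
r = float(np.corrcoef(wt_dice, iqm)[0, 1])

# Relative thresholding at the cohort median splits scans into
# "better quality" (BQ) and "worse quality" (WQ) groups.
bq = iqm <= np.median(iqm)
wq = ~bq
```

A strongly negative `r` in this toy setup would mark the IQM as a candidate for the BQ/WQ split used in the cross-training experiments.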

RESULTS

For this study, multimodal MRI scans from the BraTS 2020 training cohorts were used to train the segmentation model, which was validated on independent test sets derived from the BraTS 2021 cohort. Among the selected IQMs, models trained on BQ images according to inhomogeneity measures (coefficient of variation, coefficient of joint variation, coefficient of variation of the foreground patch), as well as models trained on WQ images according to the noise measure peak signal-to-noise ratio (PSNR), yielded significantly improved tumor segmentation accuracy compared to their inverse counterparts.
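The inhomogeneity measures named above can be sketched under their common definitions (CV = std/mean; CJV = (std_fg + std_bg) / |mean_fg - mean_bg|); MRQy's exact implementations may differ, and the intensity samples here are synthetic:

```python
import numpy as np

def cv(x):
    """Coefficient of variation: std / mean (higher suggests more inhomogeneity)."""
    return x.std() / x.mean()

def cjv(fg, bg):
    """Coefficient of joint variation: (std_fg + std_bg) / |mean_fg - mean_bg|.
    Higher values indicate more intensity overlap between the two classes."""
    return (fg.std() + bg.std()) / abs(fg.mean() - bg.mean())

# Synthetic foreground intensities with low vs. high inhomogeneity,
# plus a fixed background class.
rng = np.random.default_rng(1)
clean = rng.normal(100.0, 5.0, 10_000)
noisy = rng.normal(100.0, 25.0, 10_000)
bg = rng.normal(20.0, 5.0, 10_000)
```

Under these definitions the more inhomogeneous sample scores higher on both measures, matching the paper's use of them as "worse quality" indicators.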

CONCLUSIONS

Our results suggest that a significant correlation may exist between specific MR IQMs and DenseNet-based brain tumor segmentation performance. Selecting MRI scans for model training based on IQMs may yield more accurate and generalizable models when applied to unseen validation data.


Similar Articles

1
Magnetic resonance perfusion for differentiating low-grade from high-grade gliomas at first presentation.
Cochrane Database Syst Rev. 2018 Jan 22;1(1):CD011551. doi: 10.1002/14651858.CD011551.pub2.
2
Semi-Supervised Learning Allows for Improved Segmentation With Reduced Annotations of Brain Metastases Using Multicenter MRI Data.
J Magn Reson Imaging. 2025 Jun;61(6):2469-2479. doi: 10.1002/jmri.29686. Epub 2025 Jan 10.
3
Characterizing Breast Tumor Heterogeneity Through IVIM-DWI Parameters and Signal Decay Analysis.
Diagnostics (Basel). 2025 Jun 12;15(12):1499. doi: 10.3390/diagnostics15121499.
4
Improving brain atrophy quantification with deep learning from automated labels using tissue similarity priors.
Comput Biol Med. 2024 Sep;179:108811. doi: 10.1016/j.compbiomed.2024.108811. Epub 2024 Jul 10.
5
Predicting cognitive decline: Deep-learning reveals subtle brain changes in pre-MCI stage.
J Prev Alzheimers Dis. 2025 May;12(5):100079. doi: 10.1016/j.tjpad.2025.100079. Epub 2025 Feb 6.
6
Transformers for Neuroimage Segmentation: Scoping Review.
J Med Internet Res. 2025 Jan 29;27:e57723. doi: 10.2196/57723.

Cited By

1
Multi-modal and Multi-view Cervical Spondylosis Imaging Dataset.
Sci Data. 2025 Jul 1;12(1):1080. doi: 10.1038/s41597-025-05403-z.

References

1
Technical Note: MRQy - An open-source tool for quality control of MR imaging data.
Med Phys. 2020 Dec;47(12):6029-6038. doi: 10.1002/mp.14593. Epub 2020 Nov 27.
2
Generalizable multi-site training and testing of deep neural networks using image normalization.
Proc IEEE Int Symp Biomed Imaging. 2019 Apr;2019:348-351. doi: 10.1109/isbi.2019.8759295. Epub 2019 Jul 11.
3
UNet++: A Nested U-Net Architecture for Medical Image Segmentation.
Deep Learn Med Image Anal Multimodal Learn Clin Decis Support (2018). 2018 Sep;11045:3-11. doi: 10.1007/978-3-030-00889-5_1. Epub 2018 Sep 20.
4
HyperDense-Net: A Hyper-Densely Connected CNN for Multi-Modal Image Segmentation.
IEEE Trans Med Imaging. 2019 May;38(5):1116-1126. doi: 10.1109/TMI.2018.2878669. Epub 2018 Oct 30.
5
MRIQC: Advancing the automatic prediction of image quality in MRI from unseen sites.
PLoS One. 2017 Sep 25;12(9):e0184661. doi: 10.1371/journal.pone.0184661. eCollection 2017.
6
Fully convolutional networks for multi-modality isointense infant brain image segmentation.
Proc IEEE Int Symp Biomed Imaging. 2016;2016:1342-1345. doi: 10.1109/ISBI.2016.7493515.
7
Multi-center MRI prediction models: Predicting sex and illness course in first episode psychosis patients.
Neuroimage. 2017 Jan 15;145(Pt B):246-253. doi: 10.1016/j.neuroimage.2016.07.027. Epub 2016 Jul 12.
8
The Multimodal Brain Tumor Image Segmentation Benchmark (BRATS).
IEEE Trans Med Imaging. 2015 Oct;34(10):1993-2024. doi: 10.1109/TMI.2014.2377694. Epub 2014 Dec 4.
9
The Design of SimpleITK.
Front Neuroinform. 2013 Dec 30;7:45. doi: 10.3389/fninf.2013.00045. eCollection 2013.
