
An investigation of the effect of fat suppression and dimensionality on the accuracy of breast MRI segmentation using U-nets.

Affiliations

Physical Sciences, Sunnybrook Research Institute, Toronto, Ontario, M4N 3M5, Canada.

Department of Medical Biophysics, University of Toronto, Toronto, Ontario, M5G 1L7, Canada.

Publication

Med Phys. 2019 Mar;46(3):1230-1244. doi: 10.1002/mp.13375. Epub 2019 Feb 4.


DOI: 10.1002/mp.13375
PMID: 30609062
Abstract

PURPOSE: Accurate segmentation of the breast is required for breast density estimation and for the assessment of background parenchymal enhancement, both of which have been shown to be related to breast cancer risk. The MRI breast segmentation task is challenging, and recent work has demonstrated that convolutional neural networks perform well for it. In this study, we investigated the performance of several two-dimensional (2D) U-Net and three-dimensional (3D) U-Net configurations using both fat-suppressed and non-fat-suppressed images. We also assessed the effect of changing the number and quality of the ground truth segmentations.

MATERIALS AND METHODS: We designed eight studies to investigate the effect of the input types and of the dimensionality of the U-Net operations on breast MRI segmentation. Our training data contained 70 whole-breast volumes of T1-weighted sequences without fat suppression (WOFS) and with fat suppression (FS). For each subject, we registered the WOFS and FS volumes together before manually segmenting the breast to generate the ground truth. We compared four different input types to the U-Nets: WOFS, FS, MIXED (WOFS and FS images treated as separate samples), and MULTI (WOFS and FS images combined into a single multichannel image). We trained 2D and 3D U-Nets with these data, resulting in our eight studies (2D-WOFS, 3D-WOFS, 2D-FS, 3D-FS, 2D-MIXED, 3D-MIXED, 2D-MULTI, and 3D-MULTI). For each study, we performed a systematic grid search to tune the hyperparameters of the U-Nets, using a separate validation set of 15 whole-breast volumes. A Kruskal-Wallis test on the hyperparameter-tuning results found no statistically significant difference among the ten top models of each study; for this reason, we chose as the best model the one with the highest mean Dice similarity coefficient (DSC) on the validation set.
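The MULTI input described above can be sketched as follows: after the WOFS and FS volumes are registered to a common grid, the two sequences are stacked along a channel axis so the network sees them as one multichannel image. This is a minimal illustration of that idea; the function name, array layout, and shapes are assumptions for the example, not the authors' actual pipeline.

```python
import numpy as np

def make_multi_input(wofs: np.ndarray, fs: np.ndarray) -> np.ndarray:
    """Stack co-registered WOFS and FS volumes into one 2-channel volume.

    Both inputs are (depth, height, width) arrays assumed to be already
    registered to the same voxel grid.
    """
    if wofs.shape != fs.shape:
        raise ValueError("volumes must be registered to the same grid")
    # Channel-first layout: (channels, depth, height, width)
    return np.stack([wofs, fs], axis=0)

# Toy volumes standing in for real MRI data
wofs = np.zeros((32, 64, 64), dtype=np.float32)
fs = np.ones((32, 64, 64), dtype=np.float32)
multi = make_multi_input(wofs, fs)
print(multi.shape)  # (2, 32, 64, 64)
```

By contrast, the MIXED configuration would feed `wofs` and `fs` to the network as two independent single-channel samples rather than stacking them.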
The reported test results are those of the top model of each study on our test set, which contained 19 whole-breast volumes annotated by three readers and fused with the STAPLE algorithm. We also investigated the effect of the quality of the training annotations and of the number of training samples on this task.

RESULTS: The study with the highest average DSC was 3D-MULTI, at 0.96 ± 0.02. The second highest average was 2D-WOFS (0.96 ± 0.03), and the third was 2D-MULTI (0.96 ± 0.03). We performed the Kruskal-Wallis one-way ANOVA test with Dunn's multiple comparison tests using Bonferroni P-value correction on the results of the selected model of each study and found that 3D-MULTI, 2D-MULTI, 3D-WOFS, 2D-WOFS, 2D-FS, and 3D-FS were not statistically different in their distributions, which indicates that comparable results can be obtained in fat-suppressed and non-fat-suppressed volumes and that there is no significant difference between the 3D and 2D approaches. Our results also suggest that networks trained on single-sequence images, or on multiple sequences combined into multichannel images, perform better than models trained on a mixture of volumes from different sequences. Our investigation of the training-set size revealed that training a U-Net in this domain requires only a modest amount of training data: results obtained with 49 and 70 training datasets were not significantly different.

CONCLUSIONS: To summarize, we investigated the use of 2D and 3D U-Nets for breast volume segmentation in T1-weighted volumes with and without fat suppression. Although our highest score was obtained in the 3D-MULTI study, in which we took advantage of the information in both fat-suppressed and non-fat-suppressed volumes and of their 3D structure, all of the methods we explored gave accurate segmentations, with an average DSC of >94%, demonstrating that the U-Net is a robust segmentation method for breast MRI volumes.
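The DSC values reported above measure the overlap between a predicted mask and the ground truth: twice the intersection divided by the sum of the two mask sizes, ranging from 0 (no overlap) to 1 (identical). A minimal sketch of the metric on binary masks (the empty-mask convention here is an assumption, not taken from the paper):

```python
import numpy as np

def dice_similarity(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice similarity coefficient between two binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, truth).sum() / denom

# Toy example: two partially overlapping masks
a = np.array([[1, 1, 0], [0, 1, 0]])
b = np.array([[1, 0, 0], [0, 1, 1]])
print(round(dice_similarity(a, b), 3))  # 0.667
```

In practice the score would be computed per test volume and averaged across the test set, as in the 0.96 ± 0.02 result for 3D-MULTI.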

Similar articles

[1]
An investigation of the effect of fat suppression and dimensionality on the accuracy of breast MRI segmentation using U-nets.

Med Phys. 2019-2-4

[2]
Using deep learning to segment breast and fibroglandular tissue in MRI volumes.

Med Phys. 2017-2

[3]
Automated segmentation of the human supraclavicular fat depot via deep neural network in water-fat separated magnetic resonance images.

Quant Imaging Med Surg. 2023-7-1

[4]
U-Net breast lesion segmentations for breast dynamic contrast-enhanced magnetic resonance imaging.

J Med Imaging (Bellingham). 2023-11

[5]
Visual ensemble selection of deep convolutional neural networks for 3D segmentation of breast tumors on dynamic contrast enhanced MRI.

Eur Radiol. 2023-2

[6]
3D Breast Cancer Segmentation in DCE-MRI Using Deep Learning With Weak Annotation.

J Magn Reson Imaging. 2024-6

[7]
Development of U-Net Breast Density Segmentation Method for Fat-Sat MR Images Using Transfer Learning Based on Non-Fat-Sat Model.

J Digit Imaging. 2021-8

[8]
Exploiting the Dixon Method for a Robust Breast and Fibro-Glandular Tissue Segmentation in Breast MRI.

Diagnostics (Basel). 2022-7-11

[9]
Automated fibroglandular tissue segmentation in breast MRI using generative adversarial networks.

Phys Med Biol. 2020-5-19

[10]
Comprehensive Dynamic Contrast-Enhanced 3D Magnetic Resonance Imaging of the Breast With Fat/Water Separation and High Spatiotemporal Resolution Using Radial Sampling, Compressed Sensing, and Parallel Imaging.

Invest Radiol. 2017-10

Cited by

[1]
Impact of menopause and age on breast density and background parenchymal enhancement in dynamic contrast-enhanced magnetic resonance imaging.

J Med Imaging (Bellingham). 2025-11

[2]
Accuracy of skull stripping in a single-contrast convolutional neural network model using eight-contrast magnetic resonance images.

Radiol Phys Technol. 2023-9

[3]
Machine learning on MRI radiomic features: identification of molecular subtype alteration in breast cancer after neoadjuvant therapy.

Eur Radiol. 2023-4

[4]
Exploiting the Dixon Method for a Robust Breast and Fibro-Glandular Tissue Segmentation in Breast MRI.

Diagnostics (Basel). 2022-7-11

[5]
Development of U-Net Breast Density Segmentation Method for Fat-Sat MR Images Using Transfer Learning Based on Non-Fat-Sat Model.

J Digit Imaging. 2021-8

[6]
Current Status and Future Perspectives of Artificial Intelligence in Magnetic Resonance Breast Imaging.

Contrast Media Mol Imaging. 2020

[7]
Current and Emerging Magnetic Resonance-Based Techniques for Breast Cancer.

Front Med (Lausanne). 2020-5-12

[8]
Machine learning in breast MRI.

J Magn Reson Imaging. 2020-10

[9]
An automated computational biomechanics workflow for improving breast cancer diagnosis and treatment.

Interface Focus. 2019-8-6
