

Using deep learning to segment breast and fibroglandular tissue in MRI volumes.

Authors

Dalmış Mehmet Ufuk, Litjens Geert, Holland Katharina, Setio Arnaud, Mann Ritse, Karssemeijer Nico, Gubern-Mérida Albert

Affiliation

Radboud University Medical Center, Geert Grooteplein 10, 6525 GA, Nijmegen, The Netherlands.

Publication

Med Phys. 2017 Feb;44(2):533-546. doi: 10.1002/mp.12079.

DOI: 10.1002/mp.12079
PMID: 28035663
Abstract

PURPOSE

Automated segmentation of breast and fibroglandular tissue (FGT) is required for various computer-aided applications of breast MRI. Traditional image analysis and computer vision techniques, such as atlas-based methods, template matching, or edge and surface detection, have been applied to solve this task. However, the applicability of these methods is usually limited by the characteristics of the images in the study datasets, while breast MRI varies across acquisition protocols in addition to the variability in breast shapes. All this variability, together with various MRI artifacts, makes it challenging to develop a robust breast and FGT segmentation method using traditional approaches. Therefore, in this study, we investigated the use of a deep-learning approach known as "U-net."

MATERIALS AND METHODS

We used a dataset of 66 breast MRIs randomly selected from our scientific archive, which includes five different MRI acquisition protocols and breasts from four breast density categories in a balanced distribution. To prepare reference segmentations, we manually segmented breast and FGT for all images using an in-house developed workstation. We experimented with the application of U-net in two different ways for breast and FGT segmentation. In the first method, following the same pipeline used in traditional approaches, we trained two consecutive (2C) U-nets: the first for segmenting the breast in the whole MRI volume and the second for segmenting FGT inside the segmented breast. In the second method, we used a single 3-class (3C) U-net, which performs both tasks simultaneously by segmenting the volume into three regions: nonbreast, fat inside the breast, and FGT inside the breast. For comparison, we applied two existing and published methods to our dataset: an atlas-based method and a sheetness-based method. We used the Dice Similarity Coefficient (DSC) to measure the performance of the automated methods with respect to the manual segmentations. Additionally, we computed Pearson's correlation between the breast density values computed from the manual and automated segmentations.
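The DSC used above to score the automated segmentations, and the 3-class labeling scheme of the 3C U-net, can be sketched as follows. This is a minimal illustration, not the paper's code; the label values (0 = non-breast, 1 = fat inside breast, 2 = FGT) and the toy arrays are assumptions for demonstration only.

```python
import numpy as np

def dice(seg: np.ndarray, ref: np.ndarray) -> float:
    """Dice Similarity Coefficient between a binary automated
    segmentation `seg` and a binary manual reference `ref`:
    DSC = 2|A ∩ B| / (|A| + |B|)."""
    seg = seg.astype(bool)
    ref = ref.astype(bool)
    denom = seg.sum() + ref.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(seg, ref).sum() / denom

# Illustrative 3-class label maps (label values are assumptions,
# not taken from the paper): 0 = non-breast, 1 = fat inside the
# breast, 2 = FGT inside the breast.
pred = np.array([0, 1, 2, 2, 1, 0])
ref  = np.array([0, 1, 2, 1, 1, 0])

# Breast DSC merges the two inside-breast labels; FGT DSC uses
# the FGT label alone, mirroring the two evaluation targets.
breast_dsc = dice(pred > 0, ref > 0)   # masks agree fully -> 1.0
fgt_dsc = dice(pred == 2, ref == 2)    # 2*1/(2+1) = 2/3
```

In practice the same function applies unchanged to full 3D volumes, since NumPy reductions ignore array shape.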

RESULTS

The average DSC values for breast segmentation were 0.933, 0.944, 0.863, and 0.848 for the 3C U-net, 2C U-nets, atlas-based method, and sheetness-based method, respectively. The average DSC values for FGT segmentation obtained from the 3C U-net, 2C U-nets, and atlas-based method were 0.850, 0.811, and 0.671, respectively. The correlation between breast density values based on 3C U-net and manual segmentations was 0.974. This value was significantly higher than the 0.957 obtained from the 2C U-nets (P < 0.0001, Steiger's Z-test with Bonferroni correction) and the 0.938 obtained from the atlas-based method (P = 0.0016).
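The density correlations reported above compare a per-case volumetric density (FGT volume as a fraction of breast volume) from automated versus manual segmentations. A minimal sketch of that computation, assuming voxel-count volumes and with entirely hypothetical density values standing in for real cases:

```python
import numpy as np

def breast_density(fgt_mask: np.ndarray, breast_mask: np.ndarray) -> float:
    """Volumetric breast density: FGT voxels as a fraction of breast
    voxels (assumes a common voxel size, so counts act as volumes)."""
    return fgt_mask.sum() / breast_mask.sum()

def pearson_r(x, y) -> float:
    """Pearson correlation between paired density estimates."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xm, ym = x - x.mean(), y - y.mean()
    return (xm * ym).sum() / np.sqrt((xm ** 2).sum() * (ym ** 2).sum())

# Hypothetical per-case densities (not data from the paper):
# one value per case from manual and automated segmentation.
manual    = [0.12, 0.30, 0.45, 0.08, 0.22]
automated = [0.11, 0.32, 0.43, 0.10, 0.21]
r = pearson_r(manual, automated)  # close to 1 when the methods agree
```

Comparing two such correlations against the same manual reference (as done with Steiger's Z-test in the paper) additionally requires the correlation between the two automated methods themselves, since the estimates share a variable.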

CONCLUSIONS

In conclusion, we applied a deep-learning method, U-net, for segmenting breast and FGT in MRI in a dataset that includes a variety of MRI protocols and breast densities. Our results showed that U-net-based methods significantly outperformed the existing algorithms and resulted in significantly more accurate breast density computation.


Similar Articles

1
Using deep learning to segment breast and fibroglandular tissue in MRI volumes.
Med Phys. 2017 Feb;44(2):533-546. doi: 10.1002/mp.12079.
2
Automated fibroglandular tissue segmentation and volumetric density estimation in breast MRI using an atlas-aided fuzzy C-means method.
Med Phys. 2013 Dec;40(12):122302. doi: 10.1118/1.4829496.
3
An investigation of the effect of fat suppression and dimensionality on the accuracy of breast MRI segmentation using U-nets.
Med Phys. 2019 Mar;46(3):1230-1244. doi: 10.1002/mp.13375. Epub 2019 Feb 4.
4
Automated fibroglandular tissue segmentation in breast MRI using generative adversarial networks.
Phys Med Biol. 2020 May 19;65(10):105006. doi: 10.1088/1361-6560/ab7e7f.
5
Multi-atlas segmentation of the whole hippocampus and subfields using multiple automatically generated templates.
Neuroimage. 2014 Nov 1;101:494-512. doi: 10.1016/j.neuroimage.2014.04.054. Epub 2014 Apr 29.
6
Development of U-Net Breast Density Segmentation Method for Fat-Sat MR Images Using Transfer Learning Based on Non-Fat-Sat Model.
J Digit Imaging. 2021 Aug;34(4):877-887. doi: 10.1007/s10278-021-00472-z. Epub 2021 Jul 9.
7
Automatic Breast and Fibroglandular Tissue Segmentation in Breast MRI Using Deep Learning by a Fully-Convolutional Residual Neural Network U-Net.
Acad Radiol. 2019 Nov;26(11):1526-1535. doi: 10.1016/j.acra.2019.01.012. Epub 2019 Jan 31.
8
Segmentation of whole breast and fibroglandular tissue using nnU-Net in dynamic contrast enhanced MR images.
Magn Reson Imaging. 2021 Oct;82:31-41. doi: 10.1016/j.mri.2021.06.017. Epub 2021 Jun 18.
9
Fully Automatic Assessment of Background Parenchymal Enhancement on Breast MRI Using Machine-Learning Models.
J Magn Reson Imaging. 2021 Mar;53(3):818-826. doi: 10.1002/jmri.27429. Epub 2020 Nov 20.
10
Knowledge-based and deep learning-based automated chest wall segmentation in magnetic resonance images of extremely dense breasts.
Med Phys. 2019 Oct;46(10):4405-4416. doi: 10.1002/mp.13699. Epub 2019 Aug 10.

Cited By

1
Impact of menopause and age on breast density and background parenchymal enhancement in dynamic contrast-enhanced magnetic resonance imaging.
J Med Imaging (Bellingham). 2025 Nov;12(Suppl 2):S22002. doi: 10.1117/1.JMI.12.S2.S22002. Epub 2025 Mar 11.
2
Advances in analytical approaches for background parenchymal enhancement in predicting breast tumor response to neoadjuvant chemotherapy: A systematic review.
PLoS One. 2025 Mar 7;20(3):e0317240. doi: 10.1371/journal.pone.0317240. eCollection 2025.
3
Jointly exploring client drift and catastrophic forgetting in dynamic learning.
Sci Rep. 2025 Feb 18;15(1):5857. doi: 10.1038/s41598-025-89873-6.
4
Chan-Vese aided fuzzy C-means approach for whole breast and fibroglandular tissue segmentation: Preliminary application to real-world breast MRI.
Med Phys. 2025 May;52(5):2950-2960. doi: 10.1002/mp.17660. Epub 2025 Feb 5.
5
Performance of an AI-powered visualization software platform for precision surgery in breast cancer patients.
NPJ Breast Cancer. 2024 Nov 14;10(1):98. doi: 10.1038/s41523-024-00696-6.
6
Sentinel Lymph Node Biopsy in Breast Cancer Using Different Types of Tracers According to Molecular Subtypes and Breast Density-A Randomized Clinical Study.
Diagnostics (Basel). 2024 Oct 31;14(21):2439. doi: 10.3390/diagnostics14212439.
7
TopoTxR: A topology-guided deep convolutional network for breast parenchyma learning on DCE-MRIs.
Med Image Anal. 2025 Jan;99:103373. doi: 10.1016/j.media.2024.103373. Epub 2024 Oct 16.
8
Role of Radiology in the Diagnosis and Treatment of Breast Cancer in Women: A Comprehensive Review.
Cureus. 2024 Sep 24;16(9):e70097. doi: 10.7759/cureus.70097. eCollection 2024 Sep.
9
MRI-based Deep Learning Models for Preoperative Breast Volume and Density Assessment Assisting Breast Reconstruction.
Aesthetic Plast Surg. 2024 Dec;48(23):4994-5006. doi: 10.1007/s00266-024-04074-2. Epub 2024 May 28.
10
Quantitative assessment of background parenchymal enhancement is associated with lifetime breast cancer risk in screening MRI.
Eur Radiol. 2024 Oct;34(10):6358-6368. doi: 10.1007/s00330-024-10758-9. Epub 2024 Apr 29.