

Automated deep learning method for whole-breast segmentation in diffusion-weighted breast MRI.

Author Information

Zhang Lei, Mohamed Aly A, Chai Ruimei, Guo Yuan, Zheng Bingjie, Wu Shandong

Affiliations

Department of Radiology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA.

Department of Radiology, First Hospital of China Medical University, Heping District, Shenyang, Liaoning, China.

Publication Information

J Magn Reson Imaging. 2020 Feb;51(2):635-643. doi: 10.1002/jmri.26860. Epub 2019 Jul 13.

Abstract

BACKGROUND

Diffusion-weighted imaging (DWI) in MRI plays an increasingly important role in diagnostic applications and developing imaging biomarkers. Automated whole-breast segmentation is an important yet challenging step for quantitative breast imaging analysis. While methods have been developed on dynamic contrast-enhanced (DCE) MRI, automatic whole-breast segmentation in breast DWI MRI is still underdeveloped.

PURPOSE

To develop a deep/transfer learning-based segmentation approach for DWI MRI scans and conduct an extensive study assessment on four imaging datasets from both internal and external sources.

STUDY TYPE

Retrospective.

SUBJECTS

In all, 98 patients (144 MRI scans; 11,035 slices) of four different breast MRI datasets from two different institutions.

FIELD STRENGTH/SEQUENCES

1.5T scanners with DCE sequences (Dataset 1 and Dataset 2) and a DWI sequence; a 3.0T scanner with one external DWI sequence.

ASSESSMENT

Deep learning models (UNet and SegNet) and transfer learning were used as segmentation approaches. The main DCE dataset (4,251 2D slices from 39 patients) was used for pre-training and internal validation, and an unseen DCE dataset (431 2D slices from 20 patients) was used as an independent test dataset for evaluating the pre-trained DCE models. The main DWI dataset (6,343 2D slices from 75 MRI scans of 29 patients) was used for transfer learning and internal validation, and an unseen DWI dataset (10 2D slices from 10 patients) was used for independent evaluation of the fine-tuned models for DWI segmentation. Manual segmentations by three radiologists (each with >10 years of experience) were used to establish the ground truth for assessment. Segmentation performance was measured using the Dice Coefficient (DC), which quantifies the agreement between the expert radiologists' manual segmentations and the algorithm-generated segmentations.
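The DC used above has a simple closed form, DC = 2|A ∩ B| / (|A| + |B|) for binary masks A and B. A minimal sketch of its computation (not the authors' implementation, shown only to make the metric concrete):

```python
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """Dice Coefficient between two binary masks: 2|A∩B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    denom = pred.sum() + truth.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement by convention
    return 2.0 * np.logical_and(pred, truth).sum() / denom
```

A DC of 1.0 indicates identical masks; values near the paper's 0.92 (DCE cross-validation) mean the automated and manual breast masks overlap in roughly 92% of their combined area.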

STATISTICAL TESTS

The mean value and standard deviation of the DCs were calculated to compare segmentation results from different deep learning models.
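The per-case DCs are summarized as mean ± standard deviation; a minimal sketch with hypothetical per-scan values (the real per-scan scores are not given in the abstract):

```python
import numpy as np

# hypothetical per-scan Dice scores for one model on one dataset
dcs = np.array([0.91, 0.93, 0.90, 0.94])

# sample standard deviation (ddof=1), as is typical for a patient sample
print(f"DC = {dcs.mean():.2f} \u00b1 {dcs.std(ddof=1):.2f}")
```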

RESULTS

For segmentation on DCE MRI, the average DC of the UNet was 0.92 (cross-validation on the main DCE dataset) and 0.87 (external evaluation on the unseen DCE dataset), both higher than those of the SegNet. When segmenting DWI images with the fine-tuned models, the average DC of the UNet was 0.85 (cross-validation on the main DWI dataset) and 0.72 (external evaluation on the unseen DWI dataset), both outperforming the SegNet on the same datasets.

DATA CONCLUSION

The internal and independent tests show that the deep/transfer learning models can achieve promising segmentation performance on DWI data from different institutions and scanner types. Our proposed approach may provide an automated toolkit to support computer-aided quantitative analysis of breast DWI images.

LEVEL OF EVIDENCE

3. Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2020;51:635-643.



