Zhang Lei, Mohamed Aly A, Chai Ruimei, Guo Yuan, Zheng Bingjie, Wu Shandong
Department of Radiology, University of Pittsburgh School of Medicine, Pittsburgh, Pennsylvania, USA.
Department of Radiology, First Hospital of China Medical University, Heping District, Shenyang, Liaoning, China.
J Magn Reson Imaging. 2020 Feb;51(2):635-643. doi: 10.1002/jmri.26860. Epub 2019 Jul 13.
BACKGROUND: Diffusion-weighted imaging (DWI) in MRI plays an increasingly important role in diagnostic applications and in the development of imaging biomarkers. Automated whole-breast segmentation is an important yet challenging step for quantitative breast imaging analysis. While methods have been developed for dynamic contrast-enhanced (DCE) MRI, automatic whole-breast segmentation in breast DWI MRI remains underdeveloped.
PURPOSE: To develop a deep/transfer learning-based segmentation approach for breast DWI MRI scans and to conduct an extensive evaluation on four imaging datasets from both internal and external sources.
STUDY TYPE: Retrospective.
POPULATION: In all, 98 patients (144 MRI scans; 11,035 slices) from four different breast MRI datasets acquired at two institutions.
FIELD STRENGTH/SEQUENCES: 1.5T scanners with a DCE sequence (Dataset 1 and Dataset 2) and a DWI sequence; a 3.0T scanner with one external DWI sequence.
ASSESSMENT: Deep learning models (UNet and SegNet) and transfer learning were used as the segmentation approaches. The main DCE dataset (4,251 2D slices from 39 patients) was used for pre-training and internal validation, and an unseen DCE dataset (431 2D slices from 20 patients) was used as an independent test set for evaluating the pre-trained DCE models. The main DWI dataset (6,343 2D slices from 75 MRI scans of 29 patients) was used for transfer learning and internal validation, and an unseen DWI dataset (10 2D slices from 10 patients) was used for independent evaluation of the fine-tuned DWI segmentation models. Manual segmentations by three radiologists (each with >10 years of experience) were used to establish the ground truth. Segmentation performance was measured with the Dice Coefficient (DC), which quantifies the agreement between the expert radiologists' manual segmentations and the algorithm-generated segmentations.
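As a rough illustration of the transfer-learning step described above (a minimal sketch, not the authors' actual code), pre-trained weights from the DCE task can be loaded into a U-Net-style network and fine-tuned on DWI slices. The TinyUNet class, the checkpoint path, and dwi_train_loader below are hypothetical placeholders standing in for the paper's full 2D UNet and data pipeline:

```python
# Minimal PyTorch sketch of DCE-to-DWI transfer learning (illustrative only).
import torch
import torch.nn as nn

class TinyUNet(nn.Module):
    """Simplified encoder-decoder stand-in for the 2D UNet used in the study."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(),
            nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(32, 16, 2, stride=2), nn.ReLU(),
            nn.Conv2d(16, 1, 1),  # single-channel whole-breast mask logits
        )

    def forward(self, x):
        return self.decoder(self.encoder(x))

model = TinyUNet()
# 1) Load weights pre-trained on the DCE dataset (hypothetical checkpoint path).
model.load_state_dict(torch.load("unet_pretrained_dce.pt"))

# 2) Fine-tune on DWI slices with a binary segmentation loss.
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)
criterion = nn.BCEWithLogitsLoss()

for dwi_slice, mask in dwi_train_loader:  # hypothetical DataLoader of DWI slices and masks
    optimizer.zero_grad()
    loss = criterion(model(dwi_slice), mask)
    loss.backward()
    optimizer.step()
```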
STATISTICAL TESTS: The mean and standard deviation of the DCs were calculated to compare segmentation results across the deep learning models.
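For reference, the per-slice Dice Coefficient and the summary statistics mentioned above can be computed as in this minimal sketch (array and function names are illustrative, not taken from the paper's code):

```python
# Dice coefficient between an expert mask and a model mask, then mean/std across slices.
import numpy as np

def dice_coefficient(pred: np.ndarray, truth: np.ndarray) -> float:
    """DC = 2|A ∩ B| / (|A| + |B|) for binary masks."""
    pred = pred.astype(bool)
    truth = truth.astype(bool)
    intersection = np.logical_and(pred, truth).sum()
    denom = pred.sum() + truth.sum()
    return 2.0 * intersection / denom if denom > 0 else 1.0

# Hypothetical usage over a list of (model_mask, expert_mask) pairs:
# dcs = [dice_coefficient(m, e) for m, e in test_pairs]
# print(np.mean(dcs), np.std(dcs))
```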
RESULTS: For segmentation of the DCE MRI, the average DC of the UNet was 0.92 (cross-validation on the main DCE dataset) and 0.87 (external evaluation on the unseen DCE dataset), both higher than the performance of the SegNet. For segmentation of the DWI images by the fine-tuned models, the average DC of the UNet was 0.85 (cross-validation on the main DWI dataset) and 0.72 (external evaluation on the unseen DWI dataset), again outperforming the SegNet on the same datasets.
DATA CONCLUSION: The internal and independent tests show that the deep/transfer learning models achieve promising segmentation performance on DWI data from different institutions and scanner types. Our proposed approach may provide an automated toolkit for computer-aided quantitative analysis of breast DWI images.
LEVEL OF EVIDENCE: 3 Technical Efficacy: Stage 2. J. Magn. Reson. Imaging 2020;51:635-643.