Department of Radiology, Seoul St. Mary's Hospital, College of Medicine, The Catholic University of Korea, Seoul, Republic of Korea.
Division of Biomedical Engineering, Hankuk University of Foreign Studies, Yongin, Republic of Korea.
J Magn Reson Imaging. 2021 Mar;53(3):818-826. doi: 10.1002/jmri.27429. Epub 2020 Nov 20.
BACKGROUND: Automated measurement and classification models with objectivity and reproducibility are required for accurate evaluation of the breast cancer risk associated with fibroglandular tissue (FGT) and background parenchymal enhancement (BPE).
PURPOSE: To develop and evaluate a machine-learning algorithm for breast FGT segmentation and BPE classification.
STUDY TYPE: Retrospective.
POPULATION: A total of 794 patients with breast cancer; 594 patients were assigned to the development set and 200 to the test set.
FIELD STRENGTH/SEQUENCE: 3T and 1.5T; T1-weighted and fat-saturated T1-weighted (T1W) imaging with dynamic contrast enhancement (DCE).
ASSESSMENT: Manual segmentation was performed for the whole breast and FGT regions in the contralateral breast. The BPE region was determined by thresholding the subtraction of the pre- and postcontrast T1W images within the segmented FGT mask. Two radiologists independently assessed the FGT and BPE categories. A deep-learning-based algorithm was designed to segment and measure the volumes of the whole breast and FGT and to classify the BPE grade.
STATISTICAL TESTS: Dice similarity coefficients (DSC) and Spearman correlation analysis were used to compare the volumes from the manual and deep-learning-based segmentations. Kappa statistics were used for agreement analysis. Comparisons of the area under the receiver operating characteristic (ROC) curve (AUC) and F1 scores were used to evaluate BPE classification performance.
RESULTS: The mean (±SD) DSC between manual and deep-learning segmentations was 0.85 ± 0.11. The correlation coefficient for FGT volume between manual and deep-learning-based segmentations was 0.93. The overall accuracy of manual and deep-learning segmentation in the BPE classification task was 66% and 67%, respectively. For binary categorization of BPE grade (minimal/mild vs. moderate/marked), overall accuracy increased to 91.5% with manual segmentation and 90.5% with deep-learning segmentation; the AUC was 0.93 for both methods.
DATA CONCLUSION: This deep-learning-based algorithm can provide reliable segmentation and classification results for BPE.
LEVEL OF EVIDENCE: 3
TECHNICAL EFFICACY STAGE: 2
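The two quantitative steps described above — scoring segmentation overlap with the Dice similarity coefficient, and deriving a BPE mask by thresholding the pre-/postcontrast subtraction within the FGT mask — can be sketched as follows. This is a minimal illustrative sketch using NumPy only; the function names, the relative-enhancement formulation, and the 20% threshold are assumptions for demonstration, not values reported by the study.

```python
import numpy as np

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient (DSC) between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    a = np.asarray(mask_a, dtype=bool)
    b = np.asarray(mask_b, dtype=bool)
    total = a.sum() + b.sum()
    if total == 0:
        return 1.0  # both masks empty: perfect agreement by convention
    return 2.0 * np.logical_and(a, b).sum() / total

def bpe_mask(pre, post, fgt_mask, threshold=0.2):
    """BPE region: voxels inside the FGT mask whose relative enhancement
    (post - pre) / pre exceeds a threshold.
    The 0.2 (20%) threshold is a hypothetical value for illustration."""
    pre = np.asarray(pre, dtype=float)
    post = np.asarray(post, dtype=float)
    enhancement = (post - pre) / np.maximum(pre, 1e-6)  # avoid divide-by-zero
    return np.logical_and(np.asarray(fgt_mask, dtype=bool),
                          enhancement > threshold)
```

For example, two masks that share one of three labeled voxels yield a DSC of 2/3; in practice the DSC is computed per patient between the manual and deep-learning masks and then averaged, as in the reported 0.85 ± 0.11.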