Department of Radiation Oncology and Winship Cancer Institute, Emory University, Atlanta, GA, 30322, USA.
Cancer Hospital of the University of Chinese Academy of Sciences, Zhejiang Cancer Hospital.
Med Phys. 2021 Jan;48(1):204-214. doi: 10.1002/mp.14569. Epub 2020 Nov 18.
Automatic breast ultrasound (ABUS) imaging has become an essential tool in breast cancer diagnosis because it provides information complementary to other imaging modalities. Lesion segmentation on ABUS is a prerequisite step in breast cancer computer-aided diagnosis (CAD). This work aims to develop a deep learning-based method for automatic breast tumor segmentation in three-dimensional (3D) ABUS images.
For breast tumor segmentation in ABUS, we developed a Mask Scoring region-based convolutional neural network (R-CNN) that consists of five subnetworks: a backbone, a region proposal network, an R-CNN head, a mask head, and a mask score head. A network block that builds a direct correlation between mask quality and region class was integrated into the Mask Scoring R-CNN framework for the segmentation of new ABUS images with ambiguous regions of interest (ROIs). For segmentation accuracy evaluation, we retrospectively investigated 70 patients with breast tumors confirmed by needle biopsy and manually delineated on ABUS, of whom 40 were used for fivefold cross-validation and 30 for a hold-out test. The agreement between the automatic breast tumor segmentations and the manual contours was quantified by (I) six metrics: Dice similarity coefficient (DSC), Jaccard index, 95% Hausdorff distance (HD95), mean surface distance (MSD), residual mean square distance (RMSD), and center of mass distance (CMD); and (II) Pearson correlation analysis and Bland-Altman analysis.
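The overlap metrics above, and the mask-scoring idea of weighting the classification confidence by a predicted mask IoU, can be sketched as follows. This is a minimal NumPy illustration, not the authors' evaluation or model code; the masks and the `cls_score` value are toy 2D stand-ins for 3D ABUS volumes.

```python
import numpy as np

def dice(a, b):
    """Dice similarity coefficient between two binary masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard index (intersection over union) between two binary masks."""
    inter = np.logical_and(a, b).sum()
    union = np.logical_or(a, b).sum()
    return inter / union

# Toy 2D masks standing in for 3D segmentations (16 voxels each, 12 overlap).
gt = np.zeros((8, 8), dtype=bool)
gt[2:6, 2:6] = True
pred = np.zeros((8, 8), dtype=bool)
pred[3:7, 2:6] = True

print(dice(gt, pred))     # 0.75
print(jaccard(gt, pred))  # 0.6

# In Mask Scoring R-CNN, the mask score head regresses the IoU of the
# predicted mask against the ground truth; the final mask confidence is the
# classification score multiplied by that predicted IoU.
cls_score = 0.9  # hypothetical classification confidence
mask_score = cls_score * jaccard(gt, pred)
```

Tying the reported score to mask IoU, rather than to classification confidence alone, is what lets the network down-weight detections whose masks are poor despite a confident class label.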
The mean (median) DSC was 85% ± 10.4% (89.4%) for cross-validation and 82.1% ± 14.5% (85.6%) for the hold-out test. The corresponding HD95, MSD, RMSD, and CMD for the two tests were 1.646 ± 1.191 and 1.665 ± 1.129 mm, 0.489 ± 0.406 and 0.475 ± 0.371 mm, 0.755 ± 0.755 and 0.751 ± 0.508 mm, and 0.672 ± 0.612 and 0.665 ± 0.729 mm, respectively. The mean volumetric difference (mean ± 1.96 standard deviations) was 0.47 cc ([-0.77, 1.71]) for cross-validation and 0.23 cc ([-0.23, 0.69]) for the hold-out test.
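The volumetric agreement interval reported above follows the standard Bland-Altman construction: the mean of the paired differences plus or minus 1.96 sample standard deviations. A minimal sketch, using hypothetical tumor volumes rather than the study data:

```python
import numpy as np

def bland_altman_limits(auto_vols, manual_vols):
    """Mean volumetric difference and 95% limits of agreement
    (mean difference +/- 1.96 sample standard deviations)."""
    diff = np.asarray(auto_vols, dtype=float) - np.asarray(manual_vols, dtype=float)
    mean = diff.mean()
    sd = diff.std(ddof=1)  # sample standard deviation
    return mean, (mean - 1.96 * sd, mean + 1.96 * sd)

# Hypothetical automatic vs. manual tumor volumes in cc (not the study data).
auto_cc = [3.1, 5.4, 2.2, 7.9, 4.0]
manual_cc = [3.0, 5.0, 2.5, 7.5, 3.8]
mean_diff, (lo, hi) = bland_altman_limits(auto_cc, manual_cc)
```

A mean difference near zero with narrow limits indicates that the automatic volumes are unbiased with respect to the manual contours and that per-case disagreement is small.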
We developed a novel Mask Scoring R-CNN approach for automated segmentation of breast tumors in ABUS images and demonstrated its accuracy against manual contours. Our learning-based method can potentially assist clinical CAD of breast cancer using 3D ABUS imaging.