School of Instrumentation and Optoelectronics Engineering, Beihang University, Beijing, 100191, China.
Beijing Advanced Innovation Center for Big Data-Based Precision Medicine, Beihang University, Beijing, 100191, China.
Med Phys. 2020 Nov;47(11):5702-5714. doi: 10.1002/mp.14470. Epub 2020 Oct 6.
Breast cancer is the most common cancer among women worldwide. Medical ultrasound imaging is one of the most widely used imaging modalities for breast tumors. Automatic breast ultrasound (BUS) image segmentation can measure tumor size objectively. However, various ultrasound artifacts hinder segmentation. We proposed an attention-supervised full-resolution residual network (ASFRRN) to segment tumors in BUS images.
In the proposed method, Global Attention Upsample (GAU) and deep supervision were introduced into a full-resolution residual network (FRRN), where GAU uses attention to merge features from different levels and the merged features serve as targets for deep supervision. Two datasets were employed for evaluation. One (Dataset A) consisted of 163 BUS images with tumors (53 malignant and 110 benign) from the UDIAT Diagnostic Centre, and the other (Dataset B) included 980 BUS images with tumors (595 malignant and 385 benign) from the Sun Yat-sen University Cancer Center. The tumors in both datasets were manually segmented by medical doctors. For evaluation, the Dice coefficient (Dice), Jaccard similarity coefficient (JSC), and F1 score were calculated.
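The paper's code is not reproduced here; the following is a minimal, hypothetical PyTorch sketch of a PAN-style GAU block (the attention-upsampling module the method adapts). The class name, channel arguments, layer choices, and the exact wiring inside ASFRRN are assumptions for illustration, not the authors' implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class GlobalAttentionUpsample(nn.Module):
    """Sketch of a GAU block: high-level features gate low-level features."""

    def __init__(self, low_channels: int, high_channels: int):
        super().__init__()
        # 3x3 conv refines the low-level (high-resolution) features.
        self.low_conv = nn.Sequential(
            nn.Conv2d(low_channels, high_channels, kernel_size=3,
                      padding=1, bias=False),
            nn.BatchNorm2d(high_channels),
        )
        # Global pooling + 1x1 conv turns high-level features into
        # per-channel attention weights.
        self.high_gate = nn.Sequential(
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(high_channels, high_channels, kernel_size=1, bias=False),
            nn.BatchNorm2d(high_channels),
            nn.Sigmoid(),
        )

    def forward(self, low: torch.Tensor, high: torch.Tensor) -> torch.Tensor:
        low = self.low_conv(low)
        gate = self.high_gate(high)          # (N, C, 1, 1) channel attention
        # Upsample high-level features to the low-level spatial size and fuse.
        high_up = F.interpolate(high, size=low.shape[2:],
                                mode="bilinear", align_corners=False)
        return low * gate + high_up
```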
For Dataset A, the proposed method achieved higher Dice (84.3 ± 10.0%), JSC (75.2 ± 10.7%), and F1 score (84.3 ± 10.0%) than the previous best method, FRRN. For Dataset B, the proposed method also achieved higher Dice (90.7 ± 13.0%), JSC (83.7 ± 14.8%), and F1 score (90.7 ± 13.0%) than the previous best methods, DeepLabv3 and the dual attention network (DANet). For Dataset A + B, the proposed method achieved higher Dice (90.5 ± 13.1%), JSC (83.3 ± 14.8%), and F1 score (90.5 ± 13.1%) than the previous best method, DeepLabv3. Additionally, ASFRRN had only 10.6 M parameters, fewer than DANet (71.4 M) and DeepLabv3 (41.3 M).
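For reference, a minimal sketch of how these metrics and parameter counts can be computed on binary masks; the function names are illustrative, not from the paper. Note that for binary segmentation, Dice and F1 are algebraically identical (both equal 2TP/(2TP + FP + FN)), which is why the reported Dice and F1 values coincide.

```python
import numpy as np

def dice_jsc_f1(pred: np.ndarray, gt: np.ndarray, eps: float = 1e-8):
    """Dice, Jaccard (JSC), and F1 for binary segmentation masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    tp = np.logical_and(pred, gt).sum()
    fp = np.logical_and(pred, ~gt).sum()
    fn = np.logical_and(~pred, gt).sum()
    dice = 2 * tp / (2 * tp + fp + fn + eps)
    jsc = tp / (tp + fp + fn + eps)
    return dice, jsc, dice  # F1 == Dice for binary masks

def count_parameters_millions(model) -> float:
    """Trainable parameters in millions (e.g., ~10.6 M reported for ASFRRN)."""
    return sum(p.numel() for p in model.parameters() if p.requires_grad) / 1e6
```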
We proposed ASFRRN, which combines FRRN, an attention mechanism, and deep supervision to segment tumors in BUS images. It achieved high segmentation accuracy with a small number of parameters.