School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China; Department of Radiology and Biomedical Research Imaging Center (BRIC), University of North Carolina, Chapel Hill, NC 27599, USA.
School of Electronic and Information Engineering, Beijing Jiaotong University, Beijing 100044, China.
Med Image Anal. 2021 May;70:101918. doi: 10.1016/j.media.2020.101918. Epub 2020 Nov 28.
Tumor classification and segmentation are two important tasks for computer-aided diagnosis (CAD) using 3D automated breast ultrasound (ABUS) images. However, they are challenging due to the significant shape variation of breast tumors and the fuzzy nature of ultrasound images (e.g., low contrast and low signal-to-noise ratio). Considering the correlation between tumor classification and segmentation, we argue that learning these two tasks jointly can improve the outcomes of both. In this paper, we propose a novel multi-task learning framework for joint segmentation and classification of tumors in ABUS images. The proposed framework consists of two sub-networks: an encoder-decoder network for segmentation and a light-weight multi-scale network for classification. To account for the fuzzy boundaries of tumors in ABUS images, our framework uses an iterative training strategy to refine feature maps with the help of probability maps obtained from previous iterations. Experimental results based on a clinical dataset of 170 3D ABUS volumes collected from 107 patients indicate that the proposed multi-task framework improves tumor segmentation and classification over the single-task learning counterparts.
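To make the described design concrete, the sketch below illustrates the general idea of a joint segmentation-classification model with iterative probability-map refinement. It is only a minimal illustration, not the authors' network: the actual sub-network depths, channel widths, multi-scale design, and number of refinement iterations are not specified in the abstract, and the names EncoderDecoder, MultiScaleClassifier, and JointModel are hypothetical.

```python
import torch
import torch.nn as nn

class EncoderDecoder(nn.Module):
    """Toy 3D encoder-decoder standing in for the segmentation sub-network."""
    def __init__(self, in_ch=1, base=16):
        super().__init__()
        self.enc = nn.Sequential(nn.Conv3d(in_ch, base, 3, padding=1), nn.ReLU())
        self.down = nn.Conv3d(base, base * 2, 3, stride=2, padding=1)
        self.up = nn.ConvTranspose3d(base * 2, base, 2, stride=2)
        self.out = nn.Conv3d(base, 1, 1)

    def forward(self, x):
        f1 = self.enc(x)
        f2 = torch.relu(self.down(f1))           # shared bottleneck features
        u = torch.relu(self.up(f2)) + f1         # skip connection
        return self.out(u), f2                   # segmentation logits + features

class MultiScaleClassifier(nn.Module):
    """Light-weight classifier pooling shared features at two spatial scales."""
    def __init__(self, in_ch, n_classes=2):
        super().__init__()
        self.p1 = nn.AdaptiveAvgPool3d(1)
        self.p2 = nn.AdaptiveAvgPool3d(2)
        self.fc = nn.Linear(in_ch * (1 + 8), n_classes)

    def forward(self, feats):
        v = torch.cat([self.p1(feats).flatten(1), self.p2(feats).flatten(1)], dim=1)
        return self.fc(v)

class JointModel(nn.Module):
    """Joint model: the previous probability map is fed back as an extra input channel."""
    def __init__(self):
        super().__init__()
        self.seg = EncoderDecoder(in_ch=2)       # image + previous probability map
        self.cls = MultiScaleClassifier(in_ch=32)

    def forward(self, image, n_iters=2):
        prob = torch.full_like(image, 0.5)       # uninformative initial probability map
        for _ in range(n_iters):                 # iterative refinement of the prediction
            logits, feats = self.seg(torch.cat([image, prob], dim=1))
            prob = torch.sigmoid(logits)
        return logits, self.cls(feats)

model = JointModel()
volume = torch.randn(1, 1, 32, 64, 64)           # (batch, channel, depth, height, width)
seg_logits, cls_logits = model(volume)
print(seg_logits.shape, cls_logits.shape)
```

In such a setup, both heads would typically be trained jointly, e.g. with a Dice or cross-entropy loss on the segmentation logits plus a cross-entropy loss on the classification output, so that the shared features benefit both tasks.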