Department of Electronic Engineering, Fudan University, Shanghai, 200433, China.
Key Laboratory of Medical Imaging Computing and Computer Assisted Intervention of Shanghai, Shanghai, 200433, China.
Med Phys. 2019 Jan;46(1):215-228. doi: 10.1002/mp.13268. Epub 2018 Nov 28.
Due to low contrast, blurry boundaries, and extensive shadowing in breast ultrasound (BUS) images, automatic tumor segmentation remains a challenging task. Deep learning offers a solution to this problem, since it can effectively extract representative features from both lesions and background in BUS images.
A novel automatic tumor segmentation method is proposed that combines a dilated fully convolutional network (DFCN) with a phase-based active contour (PBAC) model. The DFCN is an improved fully convolutional network that uses dilated convolutions in its deeper layers, has fewer parameters, and employs batch normalization; its large receptive field helps separate tumors from the background. Because blurry boundaries and variations in tumor size leave the DFCN predictions relatively rough, the PBAC model, which combines region-based and phase-based energy functions, is applied to refine the segmentation results. The DFCN is trained and tested on dataset 1, which contains 570 BUS images from 89 patients. On dataset 2, a support vector machine (SVM) classifier with 10-fold cross-validation is employed to verify diagnostic ability, using 460 features extracted from the segmentation results of the proposed method.
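For concreteness, the following is a minimal sketch (in PyTorch, not the authors' code) of the kind of dilated-convolution-plus-batch-normalization block the DFCN description implies; the channel counts and dilation rate are illustrative assumptions.

```python
# Minimal sketch of a dilated-convolution block with batch normalization,
# illustrating the kind of layer the DFCN uses in its deeper layers.
# NOT the authors' implementation; sizes and dilation rate are assumptions.
import torch
import torch.nn as nn

class DilatedConvBlock(nn.Module):
    def __init__(self, in_ch, out_ch, dilation):
        super().__init__()
        # padding = dilation keeps the spatial size unchanged for a 3x3 kernel,
        # while dilation enlarges the receptive field without adding parameters.
        self.conv = nn.Conv2d(in_ch, out_ch, kernel_size=3,
                              padding=dilation, dilation=dilation, bias=False)
        self.bn = nn.BatchNorm2d(out_ch)
        self.relu = nn.ReLU(inplace=True)

    def forward(self, x):
        return self.relu(self.bn(self.conv(x)))

# A 3x3 kernel with dilation 4 covers a 9x9 region with the same 9 weights.
block = DilatedConvBlock(in_ch=64, out_ch=64, dilation=4)
y = block(torch.randn(1, 64, 128, 128))  # -> torch.Size([1, 64, 128, 128])
```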
The proposed method was compared with three state-of-the-art networks: FCN-8s, U-net, and the dilated residual network (DRN). Experimental results on 170 BUS images show that the proposed method achieved the best segmentation performance, with a Dice similarity coefficient of 88.97 ± 10.01%, a Hausdorff distance (HD) of 35.54 ± 29.70 pixels, and a mean absolute deviation (MAD) of 7.67 ± 6.67 pixels. On dataset 2, the area under the curve (AUC) of the 10-fold SVM classifier was 0.795, close to that obtained from classification based on manual segmentation results.
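The three reported metrics can be sketched as follows, assuming Dice is computed on binary masks and HD/MAD on boundary point sets extracted from them; the paper's exact boundary-extraction details may differ.

```python
# Hedged sketch of the three reported segmentation metrics: Dice similarity
# coefficient, symmetric Hausdorff distance, and mean absolute deviation.
import numpy as np
from scipy.spatial.distance import directed_hausdorff
from scipy.spatial import cKDTree

def dice(pred, gt):
    """Dice similarity coefficient between two binary masks."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum())

def hausdorff(pts_a, pts_b):
    """Symmetric Hausdorff distance between two (N, 2) boundary point sets."""
    return max(directed_hausdorff(pts_a, pts_b)[0],
               directed_hausdorff(pts_b, pts_a)[0])

def mad(pts_a, pts_b):
    """Mean absolute deviation: symmetrized average nearest-neighbor distance."""
    d_ab, _ = cKDTree(pts_b).query(pts_a)
    d_ba, _ = cKDTree(pts_a).query(pts_b)
    return 0.5 * (d_ab.mean() + d_ba.mean())
```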
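The diagnostic check can be reproduced in spirit with scikit-learn, as sketched below; the RBF kernel and the use of decision scores for the AUC are assumptions, and the feature matrix is a placeholder for the 460 extracted features.

```python
# Sketch of the 10-fold SVM diagnostic verification: features extracted from
# the segmentations are classified and AUC is computed over cross-validated
# scores. Kernel and parameters are assumptions, not the paper's settings.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_predict, StratifiedKFold
from sklearn.metrics import roc_auc_score

X = np.random.rand(200, 460)           # placeholder for the real feature matrix
y = np.random.randint(0, 2, size=200)  # placeholder benign/malignant labels

cv = StratifiedKFold(n_splits=10, shuffle=True, random_state=0)
scores = cross_val_predict(SVC(kernel="rbf"), X, y,
                           cv=cv, method="decision_function")
print("AUC:", roc_auc_score(y, scores))
```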
The proposed automatic method may be sufficiently accurate, robust, and efficient for medical ultrasound applications.