Department of Radiology, University of California, San Diego, 9500 Gilman Drive, La Jolla, CA, 92093, USA.
Department of Ultrasound, Institute of Fundamental Technological Research, Polish Academy of Sciences, Pawinskiego 5B, 02-106, Warsaw, Poland.
Med Phys. 2019 Feb;46(2):746-755. doi: 10.1002/mp.13361. Epub 2019 Jan 16.
We propose a deep learning-based approach to breast mass classification in sonography and compare it with the assessments of four experienced radiologists employing the Breast Imaging Reporting and Data System (BI-RADS) 4th edition lexicon and assessment protocol.
Several transfer learning techniques are employed to develop classifiers based on a set of 882 ultrasound images of breast masses. Additionally, we introduce the concept of a matching layer. The aim of this layer is to rescale the pixel intensities of the grayscale ultrasound images and convert them to red-green-blue (RGB) images, in order to utilize more efficiently the discriminative power of a convolutional neural network pretrained on the ImageNet dataset. We show how this conversion can be learned during fine-tuning using back-propagation. Next, we compare the performance of the transfer learning techniques with and without the color conversion. To demonstrate the usefulness of our approach, we additionally evaluate it on two publicly available datasets.
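The matching layer described above can be thought of as a small learnable map, placed in front of the pretrained network, that turns a single-channel grayscale image into a three-channel RGB-like input and whose parameters are updated by back-propagation along with the rest of the network. The following is a minimal NumPy sketch of this idea under an illustrative parameterization (a per-channel affine map with scale `w` and offset `b`); the actual layer in the paper may be parameterized differently, and the names and learning rate here are assumptions.

```python
import numpy as np

class MatchingLayer:
    """Hypothetical sketch of a matching layer: a learnable per-channel
    affine map converting a grayscale image (H, W) into a 3-channel
    tensor (3, H, W) for an ImageNet-pretrained CNN."""

    def __init__(self):
        # One scale and one offset per output channel, initialized so the
        # layer starts as plain grayscale-to-RGB replication.
        self.w = np.ones(3)
        self.b = np.zeros(3)

    def forward(self, x):
        # x: grayscale image of shape (H, W); returns shape (3, H, W).
        self.x = x
        return self.w[:, None, None] * x[None, :, :] + self.b[:, None, None]

    def backward(self, grad_out, lr=0.01):
        # grad_out: gradient of the loss w.r.t. the RGB output, (3, H, W).
        # Gradient w.r.t. the input is computed before the parameter update,
        # so it uses the weights that produced the forward pass.
        grad_x = (self.w[:, None, None] * grad_out).sum(axis=0)
        grad_w = (grad_out * self.x[None, :, :]).sum(axis=(1, 2))
        grad_b = grad_out.sum(axis=(1, 2))
        self.w -= lr * grad_w
        self.b -= lr * grad_b
        return grad_x

layer = MatchingLayer()
img = np.random.default_rng(0).random((8, 8))
rgb = layer.forward(img)
print(rgb.shape)  # (3, 8, 8)
```

Because the layer is differentiable, its scale and offset receive gradients from the classification loss during fine-tuning, so the grayscale-to-RGB conversion is optimized jointly with the pretrained network rather than fixed by hand.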
Color conversion increased the area under the receiver operating characteristic (ROC) curve for each transfer learning method. For the better-performing approach, which combined fine-tuning with the matching layer, the area under the curve (AUC) was 0.936 on a test set of 150 cases. The AUCs for the radiologists reading the same set of cases ranged from 0.806 to 0.882. On the two publicly available datasets, the proposed approach achieved AUCs of approximately 0.890.
The concept of the matching layer is generalizable and can be used to improve the overall performance of transfer learning techniques based on deep convolutional neural networks. When fully developed as a clinical tool, the methods proposed in this paper have the potential to assist radiologists in breast mass classification in ultrasound.