Xie Juanying, Liu Ran, Luttrell Joseph, Zhang Chaoyang
School of Computer Science, Shaanxi Normal University, Xi'an, China.
School of Computing Sciences and Computer Engineering, University of Southern Mississippi, Hattiesburg, MS, United States.
Front Genet. 2019 Feb 19;10:80. doi: 10.3389/fgene.2019.00080. eCollection 2019.
Breast cancer has the highest morbidity rate among cancer diagnoses worldwide and has become a major public health issue. Early diagnosis can increase the chance of successful treatment and survival, but it is a challenging and time-consuming task that relies on the experience of pathologists. Automatic diagnosis of breast cancer by analyzing histopathological images therefore plays a significant role for patients and their prognosis. Traditional feature extraction methods, however, can only extract low-level image features and require prior knowledge to select useful features, which makes them heavily dependent on human judgment. Deep learning techniques can extract high-level abstract features from images automatically, so we applied them to analyze histopathological images of breast cancer with supervised and unsupervised deep convolutional neural networks. First, we adapted the Inception_V3 and Inception_ResNet_V2 architectures to the binary and multi-class classification of breast cancer histopathological images using transfer learning. Then, to mitigate the effect of class imbalance among the subclasses, we balanced the subclasses against the Ductal Carcinoma baseline by flipping images vertically and horizontally and rotating them counterclockwise by 90 and 180 degrees. Our experimental results for supervised histopathological image classification of breast cancer, together with comparisons to results from other studies, demonstrate that classification based on Inception_V3 and Inception_ResNet_V2 is superior to existing methods, and that Inception_ResNet_V2 is so far the best deep learning architecture for diagnosing breast cancer from histopathological images. We therefore used Inception_ResNet_V2 to extract features from breast cancer histopathological images for unsupervised analysis. We also constructed a new autoencoder network that maps the features extracted by Inception_ResNet_V2 to a low-dimensional space for clustering analysis of the images. The experimental results show that clustering on the output of our proposed autoencoder network gives better results than clustering on the features extracted by Inception_ResNet_V2 alone. All of our experimental results demonstrate that deep transfer learning based on the Inception_ResNet_V2 network provides a new means of analyzing histopathological images of breast cancer.
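The following is a minimal sketch of the pipeline the abstract describes (flip/rotation augmentation, Inception_ResNet_V2 transfer learning, and an autoencoder over the extracted features for clustering), written with TensorFlow/Keras and scikit-learn as assumed tooling. Layer sizes, the bottleneck width, learning rate, number of classes, and the commented-out training calls are illustrative placeholders, not the authors' published settings.

```python
# Sketch only: assumes TensorFlow/Keras and scikit-learn; hyperparameters
# and the 8-class setting are assumptions, not the paper's exact values.
import tensorflow as tf
from sklearn.cluster import KMeans

IMG_SIZE = (299, 299)   # default Inception_ResNet_V2 input resolution
NUM_CLASSES = 8         # e.g., eight histopathological subclasses (assumption)

def augment(image):
    """Flips and rotations named in the abstract, used to balance subclasses."""
    return [
        tf.image.flip_up_down(image),      # vertical flip
        tf.image.flip_left_right(image),   # horizontal flip
        tf.image.rot90(image, k=1),        # 90 degrees counterclockwise
        tf.image.rot90(image, k=2),        # 180 degrees
    ]

# ---- supervised branch: transfer learning on Inception_ResNet_V2 ----
base = tf.keras.applications.InceptionResNetV2(
    include_top=False, weights="imagenet",
    input_shape=IMG_SIZE + (3,), pooling="avg")
base.trainable = False                      # keep ImageNet weights frozen initially

inputs = tf.keras.Input(shape=IMG_SIZE + (3,))
features = base(inputs, training=False)     # 1536-dimensional feature vector
outputs = tf.keras.layers.Dense(NUM_CLASSES, activation="softmax")(features)
classifier = tf.keras.Model(inputs, outputs)
classifier.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
                   loss="sparse_categorical_crossentropy",
                   metrics=["accuracy"])
# classifier.fit(train_ds, validation_data=val_ds, epochs=10)

# ---- unsupervised branch: autoencoder over the extracted features ----
feat_in = tf.keras.Input(shape=(1536,))
encoded = tf.keras.layers.Dense(256, activation="relu")(feat_in)
encoded = tf.keras.layers.Dense(32, activation="relu")(encoded)   # low-dim code
decoded = tf.keras.layers.Dense(256, activation="relu")(encoded)
decoded = tf.keras.layers.Dense(1536, activation="linear")(decoded)
autoencoder = tf.keras.Model(feat_in, decoded)
encoder = tf.keras.Model(feat_in, encoded)
autoencoder.compile(optimizer="adam", loss="mse")
# autoencoder.fit(deep_features, deep_features, epochs=50, batch_size=64)

# Cluster the compressed codes, e.g. into benign vs. malignant groups:
# codes = encoder.predict(deep_features)
# labels = KMeans(n_clusters=2).fit_predict(codes)
```

In this sketch the pretrained network serves two roles, mirroring the abstract: its frozen convolutional trunk plus a new softmax head handles the supervised binary or multi-class task, while the same trunk's pooled features feed a small dense autoencoder whose bottleneck provides the low-dimensional representation used for clustering.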