Faculty of Radiological Technology, School of Medical Sciences, Fujita Health University, Toyoake, Japan.
School of Medicine, Fujita Health University, Toyoake, Japan.
PLoS One. 2020 Mar 5;15(3):e0229951. doi: 10.1371/journal.pone.0229951. eCollection 2020.
Cytology is the first pathological examination performed in the diagnosis of lung cancer. In our previous study, we introduced a deep convolutional neural network (DCNN) to automatically classify cytological images as having benign or malignant features and achieved an accuracy of 81.0%. To further improve the DCNN's performance, it is necessary to train the network with more images. However, it is difficult to acquire cell images containing a variety of cytological features, because doing so requires many manual operations with a microscope. Therefore, in this study, we aimed to improve the classification accuracy of a DCNN by using both actual cytological images and images synthesized with a generative adversarial network (GAN). In the proposed method, patch images were obtained from microscopy images, and a GAN was used to generate many additional similar images from these patches. In this study, we introduced progressive growing of GANs (PGGAN), which enables the generation of high-resolution images. The generated images were used to pretrain a DCNN, which was then fine-tuned using the actual patch images. To confirm the effectiveness of the proposed method, we first evaluated the quality of the images generated by PGGAN and by a conventional deep convolutional GAN, and we then evaluated the classification performance for benign and malignant cells. The generated images were confirmed to have characteristics similar to those of the actual images, and the overall classification accuracy of lung cells was 85.3%, an improvement of approximately 4.3% over our previous study, which did not use pretraining with GAN-generated images. Based on these results, we confirmed that the proposed method is effective for the classification of cytological images in cases in which only limited data are available.
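The following is a minimal sketch, not the authors' code, of the pretrain-then-fine-tune workflow described in the abstract: a DCNN is first trained on PGGAN-generated patch images and then fine-tuned on the smaller set of actual patches. The directory names, patch size, backbone architecture, and learning rates are assumptions made for illustration.

```python
# Sketch of pretraining on GAN-generated patches followed by fine-tuning on
# real patches. Paths, image size, and the small VGG-style DCNN are assumed.
import tensorflow as tf
from tensorflow.keras import layers, models

IMG_SIZE = (256, 256)  # assumed patch size
BATCH = 32

def load_dataset(path):
    """Load benign/malignant patch images from per-class subfolders."""
    return tf.keras.utils.image_dataset_from_directory(
        path, image_size=IMG_SIZE, batch_size=BATCH, label_mode="binary")

def build_dcnn():
    """Small DCNN classifier for benign vs. malignant patches (illustrative)."""
    return models.Sequential([
        layers.Rescaling(1.0 / 255, input_shape=IMG_SIZE + (3,)),
        layers.Conv2D(32, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(64, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Conv2D(128, 3, activation="relu"), layers.MaxPooling2D(),
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dense(1, activation="sigmoid"),
    ])

# 1) Pretrain on PGGAN-generated patch images (hypothetical directory).
model = build_dcnn()
model.compile(optimizer=tf.keras.optimizers.Adam(1e-3),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(load_dataset("gan_generated_patches/"), epochs=20)

# 2) Fine-tune on the actual patch images with a lower learning rate so the
#    pretrained features are only gently adjusted.
model.compile(optimizer=tf.keras.optimizers.Adam(1e-4),
              loss="binary_crossentropy", metrics=["accuracy"])
model.fit(load_dataset("real_patches/"), epochs=10)
```

The two-stage schedule reflects the abstract's description: the synthesized images supply the volume of training data that is hard to collect manually, while the fine-tuning stage adapts the network to the genuine cytological patches.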