Combination of generative adversarial network and convolutional neural network for automatic subcentimeter pulmonary adenocarcinoma classification.

Author Information

Wang Yunpeng, Zhou Lingxiao, Wang Mingming, Shao Cheng, Shi Lili, Yang Shuyi, Zhang Zhiyong, Feng Mingxiang, Shan Fei, Liu Lei

Affiliations

Shanghai Public Health Clinical Center and Institutes of Biomedical Sciences, Fudan University, Shanghai, China.

Department of Respiratory Medicine, Zhongshan-Xuhui Hospital, Fudan University, Shanghai, China.

Publication Information

Quant Imaging Med Surg. 2020 Jun;10(6):1249-1264. doi: 10.21037/qims-19-982.

Abstract

BACKGROUND

The efficient and accurate diagnosis of pulmonary adenocarcinoma before surgery is of considerable significance to clinicians. Although computed tomography (CT) examinations are widely used in practice, it remains challenging and time-consuming for radiologists to distinguish between different types of subcentimeter pulmonary nodules. Many deep learning algorithms have been proposed, but their performance largely depends on vast amounts of data, which are difficult to collect in the medical imaging field. Therefore, we propose an automatic classification system for subcentimeter pulmonary adenocarcinoma that combines a convolutional neural network (CNN) with a generative adversarial network (GAN), aiming to optimize clinical decision-making and to offer design ideas for small-dataset algorithms.

METHODS

A total of 206 nodules with postoperative pathological labels were analyzed, comprising 30 adenocarcinomas in situ (AISs), 119 minimally invasive adenocarcinomas (MIAs), and 57 invasive adenocarcinomas (IACs). Our system consisted of two parts: GAN-based image synthesis and CNN classification. First, several popular existing GAN techniques were employed to augment the datasets, and comprehensive experiments were conducted to evaluate the quality of the GAN synthesis. The classification pipeline then operated on two-dimensional (2D) nodule-centered CT patches without the need for manual labeling information.
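To make the two-stage design concrete, the sketch below shows one way such a pipeline could be wired up: GAN-synthesized patches are mixed into the pool of raw 2D nodule-centered patches before a lightweight CNN is trained on them. This is a minimal illustration, not the authors' implementation; the 64×64 patch size, the network layout, and the synthetic-patch count are assumptions not stated in the abstract.

```python
# Minimal sketch of the GAN-augmentation + CNN-classification idea (illustrative only).
# Assumptions: 64x64 single-channel CT patches and pre-generated GAN-synthesized patches.
import numpy as np
import torch
import torch.nn as nn
from torch.utils.data import DataLoader, TensorDataset

NUM_CLASSES = 3  # AIS / MIA / IAC

class SmallPatchCNN(nn.Module):
    """Lightweight 2D CNN over nodule-centered CT patches (hypothetical architecture)."""
    def __init__(self, num_classes: int = NUM_CLASSES):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),   # 64 -> 32
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),  # 32 -> 16
            nn.Conv2d(32, 64, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )
        self.classifier = nn.Linear(64, num_classes)

    def forward(self, x):
        return self.classifier(self.features(x).flatten(1))

def build_loader(real_x, real_y, gan_x, gan_y, batch_size=32):
    """Mix raw and GAN-synthesized patches into one training set (the augmentation step)."""
    x = np.concatenate([real_x, gan_x]).astype(np.float32)
    y = np.concatenate([real_y, gan_y]).astype(np.int64)
    ds = TensorDataset(torch.from_numpy(x).unsqueeze(1), torch.from_numpy(y))
    return DataLoader(ds, batch_size=batch_size, shuffle=True)

if __name__ == "__main__":
    # Random stand-ins for the raw and GAN-synthesized 64x64 patches.
    loader = build_loader(
        np.random.rand(206, 64, 64), np.random.randint(0, 3, 206),
        np.random.rand(400, 64, 64), np.random.randint(0, 3, 400),
    )
    model = SmallPatchCNN()
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    loss_fn = nn.CrossEntropyLoss()
    for xb, yb in loader:  # one illustrative training pass
        opt.zero_grad()
        loss = loss_fn(model(xb), yb)
        loss.backward()
        opt.step()
```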

RESULTS

For GAN-based image synthesis, the visual Turing test showed that even radiologists could not reliably tell the GAN-synthesized images from the raw ones (accuracy: primary radiologist 56%, senior radiologist 65%). For CNN classification, our progressive growing WGAN improved the performance of the CNN most effectively (area under the curve = 0.83). The experiments indicated that the proposed GAN augmentation improved classification accuracy by 23.5 percentage points (from 37.0% to 60.5%) over training on raw images and by 7.3 percentage points (from 53.2% to 60.5%) over training on commonly augmented images. The performance of the combined GAN and CNN method (accuracy: 60.5%±2.6%) was comparable to that of state-of-the-art methods, while our CNN was also more lightweight.
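For reference, the headline metrics above (multi-class accuracy and area under the ROC curve) can be computed as in the short sketch below. The paper's exact evaluation protocol (cross-validation folds, one-vs-rest averaging) is not given in the abstract, so the labels and probabilities here are placeholders.

```python
# Hedged illustration of computing multi-class accuracy and AUC with dummy data.
import numpy as np
from sklearn.metrics import accuracy_score, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 3, size=60)            # AIS=0, MIA=1, IAC=2 (dummy labels)
y_prob = rng.dirichlet(np.ones(3), size=60)     # dummy per-class probabilities

acc = accuracy_score(y_true, y_prob.argmax(axis=1))
auc = roc_auc_score(y_true, y_prob, multi_class="ovr")  # one-vs-rest macro AUC
print(f"accuracy={acc:.3f}, AUC={auc:.3f}")
```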

CONCLUSIONS

The experiments revealed that GAN synthesis techniques can effectively alleviate the problem of insufficient data in medical imaging. The proposed GAN-plus-CNN framework can be generalized to the construction of other computer-aided diagnosis (CADx) algorithms and thus assist in diagnosis.
