Shi Guohua, Wang Jiawen, Qiang Yan, Yang Xiaotang, Zhao Juanjuan, Hao Rui, Yang Wenkai, Du Qianqian, Kazihise Ntikurako Guy-Fernand
College of Information and Computer, Taiyuan University of Technology, Taiyuan, China.
Comput Methods Programs Biomed. 2020 Nov;196:105611. doi: 10.1016/j.cmpb.2020.105611. Epub 2020 Jun 30.
Image classification is an important task in many medical applications. Methods based on deep learning have achieved great success in the computer vision domain. However, they typically rely on large-scale annotated datasets, and obtaining such datasets remains a serious problem in the medical domain.
In this paper, we propose a knowledge-guided adversarial augmentation method for synthesizing medical images. First, we design a Term Encoder and an Image Encoder to extract domain knowledge from radiologists; we then use this domain knowledge as a novel condition to constrain the Auxiliary Classifier Generative Adversarial Network (ACGAN) framework for synthesizing high-quality thyroid nodule images. Finally, we demonstrate our method on the task of classifying thyroid nodules in ultrasonography. Our method makes effective use of the high-quality diagnostic experience of senior radiologists. In addition, we make the novel choice of extracting domain knowledge from standardized terms rather than from the ultrasound images themselves.
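The conditioning scheme described above can be sketched minimally: a term encoder maps standardized-term features to a knowledge embedding, the generator concatenates that embedding with noise before synthesis, and an auxiliary-classifier discriminator outputs both a real/fake score and class logits. All layer shapes, weights, and function names below are illustrative assumptions, not the paper's actual architecture; single linear layers stand in for the real networks.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions (not from the paper): noise, knowledge embedding, image.
Z_DIM, C_DIM, IMG_DIM, N_CLASSES = 64, 32, 16 * 16, 2

def term_encoder(term_features, W):
    """Sketch of the Term Encoder: maps standardized-term features to a
    knowledge embedding c. One linear layer stands in for the real model."""
    return np.tanh(term_features @ W)

def generator(z, c, W):
    """ACGAN-style generator conditioned on the knowledge embedding:
    noise and condition are concatenated before synthesis."""
    return np.tanh(np.concatenate([z, c], axis=-1) @ W)

def discriminator(x, W_adv, W_cls):
    """Auxiliary-classifier head: one output for real/fake, one for the class
    (e.g. benign vs. malignant nodule)."""
    real_fake = 1.0 / (1.0 + np.exp(-(x @ W_adv)))  # adversarial score in (0, 1)
    class_logits = x @ W_cls                        # auxiliary class logits
    return real_fake, class_logits

# Random weights only illustrate the tensor flow, not trained parameters.
W_term = rng.normal(size=(10, C_DIM)) * 0.1
W_gen = rng.normal(size=(Z_DIM + C_DIM, IMG_DIM)) * 0.1
W_adv = rng.normal(size=(IMG_DIM, 1)) * 0.1
W_cls = rng.normal(size=(IMG_DIM, N_CLASSES)) * 0.1

terms = rng.normal(size=(4, 10))       # batch of 4 standardized-term vectors
c = term_encoder(terms, W_term)        # knowledge condition
z = rng.normal(size=(4, Z_DIM))        # noise
fake = generator(z, c, W_gen)          # synthetic "images" (flattened)
score, logits = discriminator(fake, W_adv, W_cls)
print(fake.shape, score.shape, logits.shape)  # (4, 256) (4, 1) (4, 2)
```

In training, the adversarial score would drive the usual GAN losses while the auxiliary class logits push the generator toward class-consistent nodules, which is what makes the knowledge condition useful for downstream classification.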
Our method is demonstrated on a limited dataset of 1937 clinical thyroid ultrasound images and their corresponding standardized terms. The proposed model classifies thyroid nodules with an accuracy of 91.46%, a sensitivity of 90.63%, a specificity of 92.65%, and an AUC of 95.32%, outperforming current thyroid nodule classification methods. The experimental results show that the model has better generalization and robustness.
We believe that the proposed method can alleviate the problem of insufficient data in the medical domain, and that other medical problems can benefit from synthetic augmentation.