Data augmentation using generative adversarial networks (CycleGAN) to improve generalizability in CT segmentation tasks.

Affiliations

Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Building 10 Room 1C224D MSC 1182, Bethesda, MD, 20892-1182, USA.

Department of Radiology, University of Wisconsin School of Medicine and Public Health, Madison, WI, USA.

Publication Information

Sci Rep. 2019 Nov 15;9(1):16884. doi: 10.1038/s41598-019-52737-x.

Abstract

Labeled medical imaging data is scarce and expensive to generate. To achieve generalizable deep learning models, large amounts of data are needed. Standard data augmentation is a method to increase generalizability and is routinely performed. Generative adversarial networks offer a novel method for data augmentation. We evaluate the use of CycleGAN for data augmentation in CT segmentation tasks. Using a large image database, we trained a CycleGAN to transform contrast CT images into non-contrast images. We then used the trained CycleGAN to augment our training set with these synthetic non-contrast images. We compared the segmentation performance of a U-Net trained on the original dataset with that of a U-Net trained on the combined dataset of original data and synthetic non-contrast images. We further evaluated the U-Net segmentation performance on two separate datasets: the original contrast CT dataset on which the segmentations were created, and a second dataset from a different hospital containing only non-contrast CTs. We refer to these two datasets as the in-distribution and out-of-distribution datasets, respectively. We show that in several CT segmentation tasks performance improves significantly, especially on out-of-distribution (non-contrast CT) data. For example, when the model was trained with standard augmentation techniques, kidney segmentation performance on out-of-distribution non-contrast images was dramatically lower than on in-distribution data (Dice score of 0.09 vs. 0.94 for out-of-distribution vs. in-distribution data, respectively, p < 0.001). When the kidney model was trained with CycleGAN augmentation, out-of-distribution (non-contrast) performance increased dramatically (from a Dice score of 0.09 to 0.66, p < 0.001). Improvements for the liver and spleen were smaller, from 0.86 to 0.89 and from 0.65 to 0.69, respectively. We believe this method will be valuable to medical imaging researchers by reducing manual segmentation effort and cost in CT imaging.
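
In practice, the augmentation scheme described above amounts to passing each contrast CT slice through the trained CycleGAN generator to obtain a synthetic non-contrast counterpart, reusing the original segmentation mask (the anatomy is unchanged by the image translation), and training the U-Net on the union of real and synthetic slices. The sketch below illustrates this idea in PyTorch; the class and function names (SyntheticNonContrastDataset, build_augmented_loader) and the generator interface are illustrative assumptions, not the authors' code.

```python
import torch
from torch.utils.data import ConcatDataset, DataLoader, Dataset


class SyntheticNonContrastDataset(Dataset):
    """Wraps a contrast-CT dataset and yields CycleGAN-translated slices
    paired with the original segmentation masks (placeholder sketch)."""

    def __init__(self, contrast_dataset, generator, device="cuda"):
        self.contrast_dataset = contrast_dataset
        self.generator = generator.to(device).eval()  # trained contrast -> non-contrast generator
        self.device = device

    def __len__(self):
        return len(self.contrast_dataset)

    @torch.no_grad()
    def __getitem__(self, idx):
        image, mask = self.contrast_dataset[idx]              # (1, H, W) tensors
        fake = self.generator(image.unsqueeze(0).to(self.device))
        return fake.squeeze(0).cpu(), mask                    # mask is reused as-is


def build_augmented_loader(contrast_dataset, generator, batch_size=8):
    """Combine original contrast slices with synthetic non-contrast slices
    for U-Net training."""
    synthetic = SyntheticNonContrastDataset(contrast_dataset, generator)
    combined = ConcatDataset([contrast_dataset, synthetic])
    return DataLoader(combined, batch_size=batch_size, shuffle=True)
```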

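Segmentation performance throughout is reported as the Dice score, i.e. the Dice similarity coefficient between the predicted and reference masks. A minimal implementation of the standard definition for binary masks (again, an illustrative sketch rather than code from the paper) is:

```python
import numpy as np


def dice_score(pred: np.ndarray, target: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity coefficient between two binary masks.

    Dice = 2 * |pred ∩ target| / (|pred| + |target|)
    1.0 means perfect overlap, 0.0 means no overlap.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    intersection = np.logical_and(pred, target).sum()
    return float(2.0 * intersection / (pred.sum() + target.sum() + eps))
```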

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/afde/6858365/9cf2c0c537ed/41598_2019_52737_Fig1_HTML.jpg
