

CoSinGAN: Learning COVID-19 Infection Segmentation from a Single Radiological Image.

Authors

Zhang Pengyi, Zhong Yunxin, Deng Yulin, Tang Xiaoying, Li Xiaoqiong

Affiliations

School of Life Science, Beijing Institute of Technology, Haidian District, Beijing 100081, China.

Key Laboratory of Convergence Medical Engineering System and Healthcare Technology, Ministry of Industry and Information Technology, Haidian District, Beijing 100081, China.

Publication

Diagnostics (Basel). 2020 Nov 3;10(11):901. doi: 10.3390/diagnostics10110901.

Abstract

Computed tomography (CT) images are currently adopted as visual evidence for COVID-19 diagnosis in clinical practice. Automated detection of COVID-19 infection from CT images based on deep models is important for faster examination. Unfortunately, collecting large-scale training data systematically in the early stage of an outbreak is difficult. To address this problem, we explore the feasibility of learning deep models for lung and COVID-19 infection segmentation from a single radiological image by synthesizing diverse radiological images. Specifically, we propose a novel conditional generative model, called CoSinGAN, which can be learned from a single radiological image with a given condition, i.e., the annotation mask of the lungs and infected regions. Our CoSinGAN captures the conditional distribution of the single radiological image and synthesizes high-resolution (512 × 512), diverse radiological images that precisely match the input conditions. We evaluate the efficacy of CoSinGAN in learning lung and infection segmentation from very few radiological images by performing 5-fold cross-validation on the COVID-19-CT-Seg dataset (20 CT cases) and independent testing on the MosMed dataset (50 CT cases). Both a 2D U-Net and a 3D U-Net, learned from four CT slices using our CoSinGAN, achieve notable infection segmentation performance, surpassing the COVID-19-CT-Seg-Benchmark counterparts, which were trained on an average of 704 CT slices, by a large margin. These results strongly confirm that our method has the potential to learn COVID-19 infection segmentation from few radiological images in the early stage of the COVID-19 pandemic.


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/cd4d/7693680/b46d4ec5024a/diagnostics-10-00901-g001.jpg
