
Realistic Lung Nodule Synthesis With Multi-Target Co-Guided Adversarial Mechanism.

Publication Info

IEEE Trans Med Imaging. 2021 Sep;40(9):2343-2353. doi: 10.1109/TMI.2021.3077089. Epub 2021 Aug 31.

Abstract

The important cues for realistic lung nodule synthesis include diversity in shape and background, controllability of semantic feature levels, and overall CT image quality. To incorporate these cues as multiple learning targets, we introduce the Multi-Target Co-Guided Adversarial Mechanism, which utilizes foreground and background masks to guide the nodule shape and lung tissues, and takes advantage of the CT lung and mediastinal windows to guide spiculation and texture control, respectively. Further, we propose a Multi-Target Co-Guided Synthesizing Network with a joint loss function to realize the co-guidance of image generation and semantic feature learning. The proposed network contains a Mask-Guided Generative Adversarial Sub-Network (MGGAN) and a Window-Guided Semantic Learning Sub-Network (WGSLN). The MGGAN generates the initial synthesis from the combined foreground and background masks, which guide the generation of the nodule shape and background tissues. Meanwhile, the WGSLN controls the semantic features and refines the synthesis quality by transforming the initial synthesis into the CT lung and mediastinal windows and performing spiculation and texture learning simultaneously. We validated the authenticity of our method quantitatively using the Fréchet Inception Score, and the results show state-of-the-art performance. We also evaluated our method as a data augmentation technique for predicting malignancy level on the LIDC-IDRI database, where it improved the accuracy of VGG-16 by 5.6%. The experimental results confirm the effectiveness of the proposed method.
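The lung/mediastinal window transform that the WGSLN relies on is standard CT intensity windowing: clip the Hounsfield-unit image to a window defined by a level and width, then rescale to [0, 1]. A minimal sketch, assuming typical radiology presets (the abstract does not state the exact window level/width values the authors used):

```python
import numpy as np

def apply_ct_window(hu_image, level, width):
    """Clip a CT image (in Hounsfield units) to a window and rescale to [0, 1]."""
    lo = level - width / 2.0
    hi = level + width / 2.0
    return np.clip((hu_image - lo) / (hi - lo), 0.0, 1.0)

# Common presets (assumed, not taken from the paper):
LUNG_WINDOW = dict(level=-600, width=1500)       # emphasizes parenchyma and spiculation
MEDIASTINAL_WINDOW = dict(level=40, width=400)   # emphasizes soft-tissue texture

hu_patch = np.linspace(-1000.0, 400.0, 64 * 64).reshape(64, 64)  # toy HU patch
lung_view = apply_ct_window(hu_patch, **LUNG_WINDOW)
soft_view = apply_ct_window(hu_patch, **MEDIASTINAL_WINDOW)
```

In the paper's pipeline the two windowed views serve as separate learning targets, letting spiculation be supervised in the lung view and texture in the mediastinal view of the same synthesized patch.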

