
OA-GAN: organ-aware generative adversarial network for synthesizing contrast-enhanced medical images.

Affiliations

Graduate School of Information Science and Engineering, Ritsumeikan University, Shiga, Japan.

Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou, People's Republic of China.

Publication information

Biomed Phys Eng Express. 2024 Mar 18;10(3). doi: 10.1088/2057-1976/ad31fa.

Abstract

Contrast-enhanced computed tomography (CE-CT) images are vital for the clinical diagnosis of focal liver lesions (FLLs). However, the use of CE-CT images imposes a significant burden on patients due to contrast agent injection and extended scanning time. Deep learning-based image synthesis models offer a promising solution by synthesizing CE-CT images from non-contrasted CT (NC-CT) images. Unlike natural image synthesis, medical image synthesis requires a specific focus on certain organs or localized regions to ensure accurate diagnosis. Determining how to effectively emphasize target organs is a challenging issue in medical image synthesis. To address this challenge, we present a novel CE-CT image synthesis model, the Organ-Aware Generative Adversarial Network (OA-GAN). The OA-GAN comprises an organ-aware (OA) network and a dual decoder-based generator. First, the OA network learns the most discriminative spatial features of the target organ (i.e. the liver) by utilizing the ground-truth organ mask as a localization cue. Subsequently, the NC-CT image and the captured features are fed into the dual decoder-based generator, which employs a local and a global decoder network to simultaneously synthesize the organ and the entire CE-CT image. Moreover, the semantic information extracted from the local decoder is transferred to the global decoder to facilitate better reconstruction of the organ in the entire CE-CT image. Qualitative and quantitative evaluation on a CE-CT dataset demonstrates that the OA-GAN outperforms state-of-the-art approaches for synthesizing two types of CE-CT images, namely arterial phase and portal venous phase images. Additionally, subjective evaluations by expert radiologists and a deep learning-based FLL classification task also affirm that CE-CT images synthesized by the OA-GAN exhibit a remarkable resemblance to real CE-CT images.
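The data flow described in the abstract (organ-aware feature extraction gated by a mask, a local decoder for the organ region, and a global decoder that fuses the local decoder's semantics into the full-image synthesis) can be sketched as a toy NumPy example. All shapes, the element-wise gating, the additive fusion rule, and every function name here are illustrative assumptions for exposition, not the authors' implementation, which uses learned GAN components.

```python
import numpy as np

rng = np.random.default_rng(0)

def oa_network(nc_ct, organ_mask):
    """Organ-aware (OA) network: emphasize spatial features of the target
    organ (liver) using the ground-truth mask as a localization cue.
    Assumed here to be simple element-wise gating."""
    return nc_ct * organ_mask

def local_decoder(organ_feat):
    """Local decoder: synthesize the organ region only.
    Placeholder identity map standing in for a learned decoder."""
    return organ_feat

def global_decoder(nc_ct, local_semantics):
    """Global decoder: synthesize the entire CE-CT image, fusing semantic
    information transferred from the local decoder (assumed additive fusion
    with an arbitrary weight of 0.5)."""
    return nc_ct + 0.5 * local_semantics

# Toy 4x4 NC-CT slice and a binary liver mask covering the center region.
nc_ct = rng.random((4, 4))
organ_mask = np.zeros((4, 4))
organ_mask[1:3, 1:3] = 1.0

organ_feat = oa_network(nc_ct, organ_mask)          # step 1: organ-aware features
local_out = local_decoder(organ_feat)               # step 2a: organ-only synthesis
synthetic_ce_ct = global_decoder(nc_ct, local_out)  # step 2b: full-image synthesis

# In this toy fusion, pixels outside the liver mask are unchanged, because
# the transferred local semantics are zero there.
outside = organ_mask == 0
assert np.allclose(synthetic_ce_ct[outside], nc_ct[outside])
```

The sketch only illustrates why the dual-decoder design can "enhance" the organ region while leaving the rest of the image grounded in the NC-CT input; in the actual model both decoders are trained adversarially.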

