A disentangled generative model for disease decomposition in chest X-rays via normal image synthesis.

Affiliations

Imaging Biomarkers and Computer-Aided Diagnosis Laboratory, Radiology and Imaging Sciences, National Institutes of Health Clinical Center, Bethesda, MD 20892-1182, USA.

Publication information

Med Image Anal. 2021 Jan;67:101839. doi: 10.1016/j.media.2020.101839. Epub 2020 Oct 7.

Abstract

The interpretation of medical images is a complex cognitive procedure requiring cautious observation, precise understanding/parsing of the normal body anatomy, and combining knowledge of physiology and pathology. Interpreting chest X-ray (CXR) images is challenging since 2D CXR images show the superimposition of internal organs/tissues with low resolution and poor boundaries. Unlike previous CXR computer-aided diagnosis works that focused on disease diagnosis/classification, we are the first to propose a deep disentangled generative model (DGM) that simultaneously generates abnormal disease residue maps and "radiorealistic" normal CXR images from an input abnormal CXR image. The intuition of our method is based on the assumption that disease regions usually superimpose upon or replace the pixels of normal tissues in an abnormal CXR. Thus, disease regions can be disentangled or decomposed from the abnormal CXR by comparing it with a generated patient-specific normal CXR. DGM consists of three encoder-decoder branches: one for radiorealistic normal CXR image synthesis using adversarial learning, one for disease separation by generating a residue map that delineates the underlying abnormal region, and one for facilitating the training process and enhancing the model's robustness to noisy data. A self-reconstruction loss is adopted in the first two branches to enforce that the generated normal CXR image preserves visual structures similar to the original CXR. We evaluated our model on a large-scale chest X-ray dataset. The results show that our model can generate disease residue/saliency maps (coherent with radiologist annotations) along with radiorealistic, patient-specific normal CXR images. The disease residue/saliency map can be used by radiologists to improve CXR reading efficiency in clinical practice. The synthesized normal CXR can be used for data augmentation and as a normal control in personalized longitudinal disease studies. Furthermore, DGM quantitatively boosts diagnosis performance on several important clinical applications, including normal/abnormal CXR classification and lung opacity classification/detection.
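
To make the decomposition idea concrete, the sketch below is a minimal, illustrative PyTorch toy model, not the authors' implementation: a shared encoder feeds three decoders producing a synthesized normal image, a disease residue map, and an auxiliary reconstruction, with a pixel-wise self-reconstruction loss assuming the abnormal input re-composes as normal image plus residue. All class names, layer sizes, and the exact loss forms are assumptions for illustration only.

```python
# Illustrative sketch of a three-branch encoder-decoder for disease decomposition.
# Assumed design (not from the paper): shared encoder, per-branch decoders,
# and an L1 self-reconstruction loss enforcing x ≈ normal + residue.
import torch
import torch.nn as nn

def conv_block(in_ch, out_ch):
    # Stride-2 downsampling block used by the toy encoder.
    return nn.Sequential(nn.Conv2d(in_ch, out_ch, 4, 2, 1),
                         nn.InstanceNorm2d(out_ch), nn.ReLU(inplace=True))

def deconv_block(in_ch, out_ch):
    # Stride-2 upsampling block used by the toy decoders.
    return nn.Sequential(nn.ConvTranspose2d(in_ch, out_ch, 4, 2, 1),
                         nn.InstanceNorm2d(out_ch), nn.ReLU(inplace=True))

class Encoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(conv_block(1, 32), conv_block(32, 64), conv_block(64, 128))
    def forward(self, x):
        return self.net(x)

class Decoder(nn.Module):
    def __init__(self, out_activation):
        super().__init__()
        self.net = nn.Sequential(deconv_block(128, 64), deconv_block(64, 32),
                                 nn.ConvTranspose2d(32, 1, 4, 2, 1), out_activation)
    def forward(self, z):
        return self.net(z)

class ToyDGM(nn.Module):
    """Hypothetical three-branch model: normal synthesis, disease residue,
    and an auxiliary self-reconstruction branch (names are illustrative)."""
    def __init__(self):
        super().__init__()
        self.encoder = Encoder()
        self.normal_dec = Decoder(nn.Tanh())   # branch 1: synthesized normal CXR
        self.residue_dec = Decoder(nn.Tanh())  # branch 2: disease residue map
        self.recon_dec = Decoder(nn.Tanh())    # branch 3: auxiliary reconstruction
    def forward(self, x):
        z = self.encoder(x)
        return self.normal_dec(z), self.residue_dec(z), self.recon_dec(z)

if __name__ == "__main__":
    model = ToyDGM()
    x = torch.randn(2, 1, 256, 256)             # batch of grayscale abnormal CXRs
    normal, residue, recon = model(x)
    l1 = nn.L1Loss()
    # Self-reconstruction: the normal image plus the residue should re-compose the input.
    loss_decompose = l1(normal + residue, x)
    # Auxiliary branch directly reconstructs the input to stabilize training on noisy data.
    loss_recon = l1(recon, x)
    print(loss_decompose.item(), loss_recon.item())
```

In the actual DGM, an adversarial discriminator on the synthesized normal image and further constraints would accompany these terms; the sketch only illustrates how a residue map can be read off by comparing the abnormal input with its generated patient-specific normal counterpart.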

