Generalizable diagnosis of chest radiographs through attention-guided decomposition of images utilizing self-consistency loss.

Affiliation

Department of Computer Science and Engineering, Indian Institute of Technology Jodhpur, N.H. 62, Nagaur Road, Karwar, Jodhpur, 342030, Rajasthan, India.

Publication information

Comput Biol Med. 2024 Sep;180:108922. doi: 10.1016/j.compbiomed.2024.108922. Epub 2024 Jul 31.

Abstract

BACKGROUND

Chest X-ray (CXR) is one of the most commonly performed imaging tests worldwide. Its wide usage has created a growing need for automated, generalizable methods that can accurately diagnose these images. Traditional methods for chest X-ray analysis often struggle to generalize across diverse datasets because of variations in imaging protocols, patient demographics, and overlapping anatomical structures. There is therefore significant demand for advanced diagnostic tools that can consistently identify abnormalities across different patient populations and imaging settings. We propose a method that provides a generalizable diagnosis of chest X-rays.

METHOD

Our method uses an attention-guided decomposer network (ADSC) to extract disease maps from chest X-ray images. The ADSC employs one encoder and multiple decoders, incorporating a novel self-consistency loss to ensure consistent functionality across its modules. The attention-guided encoder captures salient features of abnormalities, while three distinct decoders generate a normal synthesized image, a disease map, and a reconstructed input image, respectively. A discriminator distinguishes real from synthesized normal chest X-rays, improving the quality of the generated images. The disease map, together with the original chest X-ray, is fed to a DenseNet-121 classifier modified for multi-class classification of the input X-ray.
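The abstract does not give the exact loss formulation, so the following NumPy sketch is purely illustrative: `decompose` is a hypothetical stand-in for the encoder plus the three decoders (the real ADSC uses learned, attention-guided networks), and the two terms show one plausible reading of how a reconstruction loss and a self-consistency loss could constrain the modules.

```python
import numpy as np

def decompose(x):
    """Toy stand-in for the ADSC encoder + three decoders. The fixed
    rule here (treat intensity above 0.5 as 'disease') is illustrative
    only; the actual model learns this decomposition."""
    normal = np.minimum(x, 0.5)   # synthesized "normal" image
    disease = x - normal          # disease map (residual)
    recon = normal + disease      # reconstructed input image
    return normal, disease, recon

def l1(a, b):
    """Mean absolute error, standing in for a pixel-wise loss."""
    return float(np.mean(np.abs(a - b)))

rng = np.random.default_rng(0)
x = rng.random((64, 64))          # toy chest X-ray in [0, 1]

normal, disease, recon = decompose(x)

# Reconstruction term: the third decoder should reproduce the input.
loss_recon = l1(recon, x)

# Self-consistency term (one plausible reading): decomposing the
# already-normal image again should yield an empty disease map, i.e.
# the modules behave consistently on their own outputs.
_, disease2, _ = decompose(normal)
loss_sc = l1(disease2, np.zeros_like(disease2))
```

In a trained model these terms would be weighted and combined with the adversarial and classification losses; the sketch only shows why both terms are zero when the decomposition is internally consistent.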

RESULTS

Experimental results on multiple publicly available datasets demonstrate the effectiveness of our approach. For multi-class classification, we achieve up to a 3% improvement in AUROC for certain abnormalities compared with existing methods. For binary classification (normal versus abnormal), our method surpasses existing approaches across various datasets. To assess generalizability, we train our model on one dataset and test it on multiple others, computing the standard deviation of AUROC scores across the test datasets to measure performance variability. Our model exhibits superior generalization across datasets from diverse sources.
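The generalizability metric described above is simple to compute. The sketch below uses hypothetical per-dataset AUROC values (not the paper's results) to show the train-on-one, test-on-many summary: a lower standard deviation means more consistent performance across datasets.

```python
import numpy as np

# Hypothetical AUROC per held-out test dataset (illustrative values,
# not taken from the paper).
auroc_by_dataset = {"A": 0.88, "B": 0.84, "C": 0.86, "D": 0.81}

scores = np.array(list(auroc_by_dataset.values()))
mean_auroc = scores.mean()    # average discrimination ability
variability = scores.std()    # lower std => better cross-dataset generalization
```

Comparing models by `variability` (alongside `mean_auroc`) rewards methods that hold up across sources rather than excelling on a single dataset.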

CONCLUSIONS

Our model shows promising results for the generalizable diagnosis of chest X-rays. The benefits of the attention mechanism and the self-consistency loss are evident from the results. In future work, we plan to incorporate explainable AI techniques to provide explanations for model decisions, and to design data augmentation techniques that mitigate class imbalance.
