
A Comparative Analysis of the Mamba, Transformer, and CNN Architectures for Multi-Label Chest X-Ray Anomaly Detection in the NIH ChestX-Ray14 Dataset.

Authors

Yanar Erdem, Kutan Furkan, Ayturan Kubilay, Kutbay Uğurhan, Algın Oktay, Hardalaç Fırat, Ağıldere Ahmet Muhteşem

Affiliations

Department of Healthcare Systems Engineering, ASELSAN, 06200 Ankara, Turkey.

Department of Test and Verification Engineering, ASELSAN, 06200 Ankara, Turkey.

Publication

Diagnostics (Basel). 2025 Sep 1;15(17):2215. doi: 10.3390/diagnostics15172215.

DOI:10.3390/diagnostics15172215
PMID:40941702
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12428523/
Abstract

Recent state-of-the-art advances in deep learning have significantly improved diagnostic accuracy in medical imaging, particularly in chest radiograph (CXR) analysis. Motivated by these developments, a comprehensive comparison was conducted to investigate how architectural choices affect the performance of 14 deep learning models across Convolutional Neural Networks (CNNs), Transformer-based models, and Mamba-based State Space Models. These models were trained and evaluated under identical conditions on the NIH ChestX-ray14 dataset, a large-scale and widely used benchmark comprising 112,120 labeled CXR images with 14 thoracic disease categories. It was found that recent hybrid architectures (particularly ConvFormer, CaFormer, and EfficientNet) deliver superior performance in both common and rare pathologies. ConvFormer achieved the highest mean AUROC of 0.841 when averaged across all 14 thoracic disease classes, closely followed by EfficientNet and CaFormer. Notably, AUROC scores of 0.94 for hernia, 0.91 for cardiomegaly, and 0.88 for edema and effusion were achieved by the proposed models, surpassing previously reported benchmarks. These results not only highlight the continued strength of CNNs but also demonstrate the growing potential of Transformer-based architectures in medical image analysis. This work contributes to the literature by providing a unified, state-of-the-art benchmarking of diverse deep learning models, offering valuable guidance for researchers and practitioners developing clinically robust AI systems for radiology.
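The benchmark's headline metric is the mean AUROC over the 14 disease classes. As a minimal, stdlib-only sketch (not the authors' implementation, and the toy labels and scores below are purely illustrative), per-class AUROC can be computed from its pairwise-ranking definition and then averaged:

```python
def auroc(y_true, y_score):
    """AUROC via the pairwise-ranking definition: the probability that a
    randomly chosen positive sample is scored above a randomly chosen
    negative one (ties count as half a win)."""
    pos = [s for s, y in zip(y_score, y_true) if y == 1]
    neg = [s for s, y in zip(y_score, y_true) if y == 0]
    if not pos or not neg:                      # class absent from the split
        return float("nan")
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def mean_auroc(y_true, y_score):
    """Per-class AUROC and its unweighted mean for multi-label data.
    y_true: rows of 0/1 label vectors; y_score: matching rows of scores."""
    n_classes = len(y_true[0])
    per_class = [auroc([row[c] for row in y_true],
                       [row[c] for row in y_score])
                 for c in range(n_classes)]
    return per_class, sum(per_class) / n_classes

# Toy example: 4 images, 2 disease classes (values are made up).
labels = [[1, 0], [0, 1], [1, 1], [0, 0]]
scores = [[0.9, 0.1], [0.2, 0.8], [0.8, 0.7], [0.1, 0.3]]
per_class, mean = mean_auroc(labels, scores)
print(per_class, mean)   # both classes perfectly ranked -> [1.0, 1.0] 1.0
```

The O(n²) pairwise loop is fine for a sketch; production code would use a rank-based formulation (as in scikit-learn's `roc_auc_score`), which is equivalent but O(n log n).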


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a191/12428523/8409a62ea7e0/diagnostics-15-02215-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a191/12428523/0b317eb2f406/diagnostics-15-02215-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a191/12428523/20b512359b06/diagnostics-15-02215-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a191/12428523/35f05f866e5e/diagnostics-15-02215-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a191/12428523/9dc64bff0954/diagnostics-15-02215-g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a191/12428523/3b4b0caf07bd/diagnostics-15-02215-g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a191/12428523/f2936b588c8a/diagnostics-15-02215-g007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a191/12428523/badc38619540/diagnostics-15-02215-g008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a191/12428523/e62514aec545/diagnostics-15-02215-g009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a191/12428523/4590b06554a4/diagnostics-15-02215-g010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a191/12428523/9cc7ed45f195/diagnostics-15-02215-g011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a191/12428523/182bb818e45d/diagnostics-15-02215-g012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a191/12428523/6bb1f338b14a/diagnostics-15-02215-g013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a191/12428523/5ebfcd7c0471/diagnostics-15-02215-g014.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a191/12428523/9dd43a7ee409/diagnostics-15-02215-g015.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a191/12428523/c9668340d5d6/diagnostics-15-02215-g016.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a191/12428523/ab4af890ed0f/diagnostics-15-02215-g017.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a191/12428523/9ad445352bc3/diagnostics-15-02215-g018.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a191/12428523/bbd468145705/diagnostics-15-02215-g019.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a191/12428523/3b1c95a308d6/diagnostics-15-02215-g020.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a191/12428523/24b55895f646/diagnostics-15-02215-g021.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a191/12428523/e4664f56f9d2/diagnostics-15-02215-g022.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a191/12428523/98af462f147b/diagnostics-15-02215-g023.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a191/12428523/9bba7b123c98/diagnostics-15-02215-g024.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a191/12428523/4258b739ec4d/diagnostics-15-02215-g025.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/a191/12428523/d70a0cb13d35/diagnostics-15-02215-g026.jpg

Similar Articles

1
A Comparative Analysis of the Mamba, Transformer, and CNN Architectures for Multi-Label Chest X-Ray Anomaly Detection in the NIH ChestX-Ray14 Dataset.
Diagnostics (Basel). 2025 Sep 1;15(17):2215. doi: 10.3390/diagnostics15172215.
2
CXR-MultiTaskNet: a unified deep learning framework for joint disease localization and classification in chest radiographs.
Sci Rep. 2025 Aug 31;15(1):32022. doi: 10.1038/s41598-025-16669-z.
3
Prescription of Controlled Substances: Benefits and Risks.
4
Selective State Space Models Outperform Transformers at Predicting RNA-Seq Read Coverage.
bioRxiv. 2025 Feb 17:2025.02.13.638190. doi: 10.1101/2025.02.13.638190.
5
Comparison of Two Modern Survival Prediction Tools, SORG-MLA and METSSS, in Patients With Symptomatic Long-bone Metastases Who Underwent Local Treatment With Surgery Followed by Radiotherapy and With Radiotherapy Alone.
Clin Orthop Relat Res. 2024 Dec 1;482(12):2193-2208. doi: 10.1097/CORR.0000000000003185. Epub 2024 Jul 23.
6
A deep learning approach to direct immunofluorescence pattern recognition in autoimmune bullous diseases.
Br J Dermatol. 2024 Jul 16;191(2):261-266. doi: 10.1093/bjd/ljae142.
7
Comparative analysis of convolutional neural networks and transformer architectures for breast cancer histopathological image classification.
Front Med (Lausanne). 2025 Jun 17;12:1606336. doi: 10.3389/fmed.2025.1606336. eCollection 2025.
8
Comparative Evaluation of CNN and Transformer Architectures for Flowering Phase Classification of Mill. with Automated Image Quality Filtering.
Sensors (Basel). 2025 Aug 27;25(17):5326. doi: 10.3390/s25175326.
9
Comprehensive Segmentation of Gray Matter Structures on T1-Weighted Brain MRI: A Comparative Study of Convolutional Neural Network, Convolutional Neural Network Hybrid-Transformer or -Mamba Architectures.
AJNR Am J Neuroradiol. 2025 Apr 2;46(4):742-749. doi: 10.3174/ajnr.A8544.
10
Systematic Review of Hybrid Vision Transformer Architectures for Radiological Image Analysis.
J Imaging Inform Med. 2025 Jan 27. doi: 10.1007/s10278-024-01322-4.
