Deep Learning-Based Object Detection Strategies for Disease Detection and Localization in Chest X-Ray Images.

Author Information

Cheng Yi-Ching, Hung Yi-Chieh, Huang Guan-Hua, Chen Tai-Been, Lu Nan-Han, Liu Kuo-Ying, Lin Kuo-Hsuan

Affiliations

Institute of Statistics, National Yang Ming Chiao Tung University, Hsinchu 300093, Taiwan.

Department of Radiological Technology, Faculty of Medical Technology, Teikyo University, Tokyo 173-8605, Japan.

Publication Information

Diagnostics (Basel). 2024 Nov 22;14(23):2636. doi: 10.3390/diagnostics14232636.

Abstract

BACKGROUND AND OBJECTIVES

Chest X-ray (CXR) images are commonly used to diagnose respiratory and cardiovascular diseases. However, traditional manual interpretation is often subjective, time-consuming, and prone to errors, leading to inconsistent detection accuracy and poor generalization. In this paper, we present deep learning-based object detection methods for automatically identifying and annotating abnormal regions in CXR images.

METHODS

We developed and tested our models using disease-labeled CXR images and location-bounding boxes from E-Da Hospital. Given the prevalence of normal images over diseased ones in clinical settings, we created various training datasets and approaches to assess how different proportions of background images impact model performance. To address the issue of limited examples for certain diseases, we also investigated few-shot object detection techniques. We compared convolutional neural networks (CNNs) and Transformer-based models to determine the most effective architecture for medical image analysis.
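The abstract does not specify how the training datasets with different background-image proportions were assembled. As a minimal illustration only, the Python sketch below shows one plausible way to build a detection training set with a controlled share of background (normal) images; the function name `build_training_set`, its parameters, and the sample representation are assumptions for this sketch, not details taken from the paper.

```python
import random

def build_training_set(diseased, normals, background_ratio, seed=42):
    """Assemble a detection training set with a controlled share of background images.

    diseased:         annotated samples, e.g. (image_path, [labeled bounding boxes])
    normals:          normal CXR images with no boxes (used as background images)
    background_ratio: desired fraction of background images in the final set (< 1.0)
    """
    assert 0.0 <= background_ratio < 1.0
    random.seed(seed)

    # Choose n_bg so that n_bg / (n_diseased + n_bg) == background_ratio.
    n_dis = len(diseased)
    n_bg = int(round(background_ratio * n_dis / (1.0 - background_ratio)))
    n_bg = min(n_bg, len(normals))

    dataset = list(diseased) + random.sample(normals, n_bg)
    random.shuffle(dataset)
    return dataset

# Hypothetical usage: train and evaluate the same detector on sets that differ
# only in their proportion of background images.
# for ratio in (0.0, 0.25, 0.5, 0.75):
#     train_set = build_training_set(diseased_samples, normal_samples, ratio)
#     ...train an object detector on train_set and compare performance...
```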

RESULTS

The findings show that background image proportions greatly influenced model inference. Moreover, schemes incorporating binary classification consistently improved performance, and CNN-based models outperformed Transformer-based models across all scenarios.
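The abstract states only that schemes incorporating binary classification improved performance; one plausible reading is a two-stage pipeline in which a binary normal-vs-abnormal classifier gates the object detector. The sketch below illustrates that reading under stated assumptions: `two_stage_inference`, the `abnormal_threshold` parameter, and the behavior of the `classifier` and `detector` callables are hypothetical, not confirmed by the paper.

```python
def two_stage_inference(image, classifier, detector, abnormal_threshold=0.5):
    """Hypothetical two-stage scheme: a binary normal-vs-abnormal classifier
    decides whether the object detector is applied at all.

    classifier(image) is assumed to return P(abnormal) as a float;
    detector(image) is assumed to return a list of (label, box, score) tuples.
    """
    if classifier(image) < abnormal_threshold:
        return []           # image treated as normal: no disease annotations
    return detector(image)  # disease labels with location bounding boxes
```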

CONCLUSIONS

We have developed a more efficient and reliable system for the automated detection of disease labels and location bounding boxes in CXR images.

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/d444/11640298/c6372e1cce7a/diagnostics-14-02636-g001.jpg
