

Classification of COVID-19 from community-acquired pneumonia: Boosting the performance with capsule network and maximum intensity projection image of CT scans.

Affiliations

College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China.

Research Center for Healthcare Data Science, Zhejiang Lab, Hangzhou, China.

Publication information

Comput Biol Med. 2023 Mar;154:106567. doi: 10.1016/j.compbiomed.2023.106567. Epub 2023 Jan 23.

Abstract

BACKGROUND

The coronavirus disease 2019 (COVID-19) and community-acquired pneumonia (CAP) present a high degree of similarity in chest computed tomography (CT) images. Therefore, a procedure for accurately and automatically distinguishing between them is crucial.

METHODS

A deep learning method for distinguishing COVID-19 from CAP is developed using maximum intensity projection (MIP) images from CT scans. LinkNet is employed for lung segmentation of chest CT images. MIP images are produced by projecting the maximum gray value of the intrapulmonary CT voxels. The MIP images are input into a capsule network for patient-level prediction and diagnosis of COVID-19. The network is trained using 333 CT scans (168 COVID-19/165 CAP) and validated on three external datasets containing 3581 CT scans (2110 COVID-19/1471 CAP).
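For illustration, a minimal sketch of the maximum intensity projection step described above, assuming the CT volume and the LinkNet lung mask are available as NumPy arrays. The function name, the background fill value, and the projection axis are illustrative assumptions, not details taken from the paper.

```python
import numpy as np

def lung_mip(ct_volume: np.ndarray, lung_mask: np.ndarray, axis: int = 0) -> np.ndarray:
    """Maximum intensity projection restricted to the segmented lungs.

    ct_volume : 3-D array of CT attenuation values, shape (D, H, W).
    lung_mask : binary lung mask of the same shape (e.g. from LinkNet).
    axis      : projection axis (assumed here; the paper projects along the slice stack).
    """
    # Suppress voxels outside the lungs so only intrapulmonary values
    # can contribute to the projection.
    background = ct_volume.min()
    masked = np.where(lung_mask.astype(bool), ct_volume, background)
    # Keep the brightest voxel encountered along the projection axis.
    return masked.max(axis=axis)
```

A resulting 2-D MIP image would then be intensity-normalized and passed to the classification network.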

RESULTS

LinkNet achieves the highest Dice coefficient of 0.983 for lung segmentation. For the classification of COVID-19 and CAP, the capsule network with the DenseNet-121 feature extractor outperforms ResNet-50 and Inception-V3, achieving an accuracy of 0.970 on the training dataset. Without MIP or the capsule network, the accuracy decreases to 0.857 and 0.818, respectively. Accuracy scores of 0.961, 0.997, and 0.949 are achieved on the external validation datasets. The proposed method has higher or comparable sensitivity compared with ten state-of-the-art methods.
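For reference, the Dice coefficient quoted for lung segmentation measures the overlap between a predicted and a reference binary mask, Dice = 2|A ∩ B| / (|A| + |B|). A minimal sketch of that computation, assuming binary NumPy masks; the helper name and the smoothing term are illustrative additions, not from the paper.

```python
import numpy as np

def dice_coefficient(pred_mask: np.ndarray, ref_mask: np.ndarray, eps: float = 1e-7) -> float:
    """Dice similarity: 2 * |A ∩ B| / (|A| + |B|) for two binary masks."""
    pred = pred_mask.astype(bool)
    ref = ref_mask.astype(bool)
    intersection = np.logical_and(pred, ref).sum()
    # eps guards against division by zero when both masks are empty.
    return float((2.0 * intersection + eps) / (pred.sum() + ref.sum() + eps))
```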

CONCLUSIONS

The proposed method illustrates the feasibility of applying MIP images from CT scans to distinguish COVID-19 from CAP using capsule networks. MIP images provide conspicuous benefits when exploiting deep learning to detect COVID-19 lesions from CT scans and the capsule network improves COVID-19 diagnosis.

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/64cd/9869624/a85c404e55fb/gr1_lrg.jpg
