College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China; Key Laboratory of Intelligent Computing in Medical Image, Ministry of Education, Northeastern University, Shenyang, China.
College of Medicine and Biological Information Engineering, Northeastern University, Shenyang, China.
Comput Methods Programs Biomed. 2021 Nov;211:106406. doi: 10.1016/j.cmpb.2021.106406. Epub 2021 Sep 9.
Given that the novel coronavirus disease 2019 (COVID-19) has become a pandemic, a method to accurately distinguish COVID-19 from community-acquired pneumonia (CAP) is urgently needed. However, the spatial uncertainty and morphological diversity of COVID-19 lesions in the lungs, together with their subtle differences from CAP, make differential diagnosis non-trivial.
We propose a deep represented multiple instance learning (DR-MIL) method for this task. The 3D volumetric CT scan of one patient is treated as one bag, and ten CT slices are selected as the initial instances. For each instance, deep features are extracted from a pre-trained ResNet-50 with fine-tuning and summarized as one deep represented instance score (DRIS). Each bag, represented by a DRIS for each initial instance, is then fed into a citation k-nearest neighbor (citation-KNN) search to generate the final prediction. A total of 141 COVID-19 and 100 CAP CT scans were used. The performance of DR-MIL was compared with other potential strategies and state-of-the-art models.
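The abstract does not include an implementation; as a rough sketch, the per-slice scoring and the bag-level citation-KNN step described above could be organized as follows. The use of the COVID-19 class probability as the DRIS, the minimal Hausdorff bag distance, the reference/citer counts, and all function names are illustrative assumptions rather than details given by the authors.

# Minimal sketch of the DR-MIL pipeline, under assumptions noted above.
import numpy as np
import torch
import torch.nn as nn
from torchvision import models

def build_slice_scorer(num_classes: int = 2) -> nn.Module:
    """ResNet-50 pre-trained on ImageNet with a new 2-class head for fine-tuning."""
    net = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
    net.fc = nn.Linear(net.fc.in_features, num_classes)
    return net

@torch.no_grad()
def bag_to_dris(model: nn.Module, slices: torch.Tensor) -> np.ndarray:
    """Map one patient's ten selected CT slices (one bag) to ten scalar DRIS values.
    `slices` is a (10, 3, 224, 224) tensor of preprocessed slice images."""
    model.eval()
    logits = model(slices)                      # (10, 2)
    probs = torch.softmax(logits, dim=1)[:, 1]  # assumed DRIS: COVID-19 class probability
    return probs.cpu().numpy()

def min_hausdorff(bag_a: np.ndarray, bag_b: np.ndarray) -> float:
    """Minimal Hausdorff distance between two bags of scalar DRIS values (assumed bag metric)."""
    return float(np.abs(bag_a[:, None] - bag_b[None, :]).min())

def citation_knn_predict(query: np.ndarray, train_bags: list, train_labels: list,
                         n_ref: int = 3, n_cit: int = 5) -> int:
    """Citation-KNN: vote over the query's R nearest references plus the training
    bags (citers) that would rank the query among their C nearest neighbours.
    Labels: 0 = CAP, 1 = COVID-19."""
    bags = train_bags + [query]
    n = len(train_bags)
    dist = np.array([[min_hausdorff(a, b) for b in bags] for a in bags])
    np.fill_diagonal(dist, np.inf)                       # ignore self-distances
    references = np.argsort(dist[n])[:n_ref]             # training bags nearest to the query
    citers = [i for i in range(n) if n in np.argsort(dist[i])[:n_cit]]
    votes = [train_labels[i] for i in references] + [train_labels[i] for i in citers]
    return int(np.mean(votes) >= 0.5) if votes else 0

In use, bag_to_dris would be applied to every patient's ten selected slices, and citation_knn_predict would then vote over the labeled training bags to assign COVID-19 or CAP to a new patient.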
DR-MIL achieved an accuracy of 95% and an area under the curve of 0.943, which were superior to those observed for comparable methods. COVID-19 and CAP exhibited significant differences in both the DRIS and the spatial pattern of lesions (p<0.001). As a means of content-based image retrieval, DR-MIL can identify the images serving as key instances, references, and citers to support visual interpretation.
DR-MIL can effectively represent the deep characteristics of COVID-19 lesions in CT images and accurately distinguish COVID-19 from CAP in a weakly supervised manner. The resulting DRIS is a useful supplement to visual interpretation of the spatial pattern of lesions when screening for COVID-19.