

Multimodal Deep Learning Network for Differentiating Between Benign and Malignant Pulmonary Ground Glass Nodules.

Author Information

Liu Gang, Liu Fei, Mao Xu, Xie Xiaoting, Sang Jingyao, Ma Husai, Yang Haiyun, He Hui

Affiliations

Department of Interventional Radiology, Qinghai Red Cross Hospital, Xining, China.

Department of Thoracic Surgery, Qinghai Red Cross Hospital, Xining, China.

Publication Information

Curr Med Imaging. 2024;20:e15734056301741. doi: 10.2174/0115734056301741240903072017.

Abstract

OBJECTIVE

This study aimed to establish a multimodal deep-learning network model to enhance the diagnosis of benign and malignant pulmonary ground glass nodules (GGNs).

METHODS

Retrospective data on pulmonary GGNs were collected from multiple centers across China, spanning North, Northeast, Northwest, South, and Southwest China. The data were divided into a training set and a validation set in an 8:2 ratio. In addition, a GGN dataset obtained from our hospital database was used as the test set. All patients underwent chest computed tomography (CT), and the final diagnosis of each nodule was based on the postoperative pathology report. The Residual Network (ResNet) was used to extract imaging features, the Word2Vec method to extract semantic information, and the Self-Attention method to combine imaging features with patient data into a multimodal classification model. The diagnostic performance of the proposed multimodal model was then compared with that of existing ResNet and VGG models and of radiologists.
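The fusion step described above can be sketched as follows. This is a minimal illustration, not the authors' implementation: the feature dimensions, the single image token, the three text tokens, and the mean-pooling at the end are all assumptions. Image features (e.g. from a ResNet) and clinical-text embeddings (e.g. from Word2Vec) are stacked as tokens and mixed with scaled dot-product self-attention.

```python
import numpy as np

def self_attention(tokens: np.ndarray) -> np.ndarray:
    """Single-head scaled dot-product self-attention over token rows."""
    d = tokens.shape[-1]
    scores = tokens @ tokens.T / np.sqrt(d)           # (n, n) pairwise similarity
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ tokens                           # attended token mixture

rng = np.random.default_rng(0)
img_feat = rng.normal(size=(1, 64))   # stand-in for a ResNet feature vector
txt_feat = rng.normal(size=(3, 64))   # stand-in for Word2Vec term vectors

tokens = np.vstack([img_feat, txt_feat])   # (4, 64) multimodal token stack
fused = self_attention(tokens)             # (4, 64) cross-modal mixture
pooled = fused.mean(axis=0)                # pooled vector for a classifier head
print(fused.shape, pooled.shape)
```

In a real model the pooled vector would feed a benign/malignant classification head trained end-to-end; here it only demonstrates the shapes involved.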

RESULTS

The multicenter dataset comprised 1020 GGNs (265 benign and 755 malignant), and the test dataset comprised 204 GGNs (67 benign and 137 malignant). In the validation set, the proposed multimodal model achieved an accuracy of 90.2%, a sensitivity of 96.6%, and a specificity of 75.0%, surpassing the VGG (73.1%, 76.7%, and 66.5%) and ResNet (78.0%, 83.3%, and 65.8%) models in diagnosing benign and malignant nodules. In the test set, the multimodal model correctly diagnosed 125 (91.18%) malignant nodules, outperforming radiologists (80.37% accuracy), and correctly identified 54 (80.70%) benign nodules, versus the radiologists' accuracy of 85.47%. A consistency test comparing the radiologists' and the multimodal model's diagnoses against postoperative pathology showed strong agreement, with the multimodal model aligning more closely with the gold-standard pathological findings (Kappa=0.720, P<0.01).
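The agreement statistic reported above is Cohen's kappa, which corrects observed agreement for agreement expected by chance. The sketch below shows the computation on made-up toy labels (1 = malignant, 0 = benign); the data are illustrative only and unrelated to the study's results.

```python
import numpy as np

def cohens_kappa(a, b) -> float:
    """Cohen's kappa: (p_o - p_e) / (1 - p_e) for two raters' labels."""
    a, b = np.asarray(a), np.asarray(b)
    labels = np.union1d(a, b)
    p_o = np.mean(a == b)                 # observed agreement
    p_e = sum(np.mean(a == c) * np.mean(b == c) for c in labels)  # chance agreement
    return (p_o - p_e) / (1 - p_e)

model_pred = [1, 1, 0, 1, 0, 1, 1, 0]     # toy model diagnoses
pathology  = [1, 1, 0, 0, 0, 1, 1, 1]     # toy gold-standard pathology
print(round(cohens_kappa(model_pred, pathology), 3))
```

Values near 1 indicate near-perfect agreement; the study's reported kappa of 0.720 falls in the range conventionally described as substantial agreement.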

CONCLUSION

The multimodal deep learning network model showed promising diagnostic performance in distinguishing benign from malignant GGNs and therefore holds potential as a reference tool to help radiologists improve diagnostic accuracy and work efficiency in clinical settings.

