Division of Cardiovascular Imaging, Department of Radiology and Radiological Science, Medical University of South Carolina, Charleston, South Carolina.
Acad Radiol. 2022 Feb;29 Suppl 2:S108-S117. doi: 10.1016/j.acra.2021.02.007. Epub 2021 Mar 10.
Research on the implementation of artificial intelligence (AI) in radiology workflows and its impact on reporting remains scarce. In this study, we aimed to assess whether an AI platform would perform better than clinical radiology reports in evaluating noncontrast chest computed tomography (CT) scans.
Consecutive patients who had undergone noncontrast chest CT were retrospectively identified. The radiology reports were reviewed in a binary fashion for reporting of pulmonary lesions, pulmonary emphysema, aortic dilatation, coronary artery calcifications (CAC), and vertebral compression fractures (VCF). CT scans were then processed using an AI platform. The reports' findings and the AI results were subsequently compared against a consensus read by two board-certified radiologists as the reference standard.
A total of 100 patients (mean age: 64.2 ± 14.8 years; 57% male) were included in this study. AI processing failed for aortic segmentation in 2 cases and for calcium quantification in 3 cases. AI showed superior diagnostic performance in identifying aortic dilatation (AI: sensitivity 96.3%, specificity 81.4%, AUC 0.89 vs. reports: sensitivity 25.9%, specificity 100%, AUC 0.63; p < 0.001) and CAC (AI: sensitivity 89.8%, specificity 100%, AUC 0.95 vs. reports: sensitivity 75.4%, specificity 94.9%, AUC 0.85; p = 0.005). Reports performed better than AI in identifying pulmonary lesions (reports: sensitivity 97.6%, specificity 100%, AUC 0.99 vs. AI: sensitivity 92.8%, specificity 82.4%, AUC 0.88; p = 0.024) and VCF (reports: sensitivity 100%, specificity 100%, AUC 1.0 vs. AI: sensitivity 100%, specificity 63.7%, AUC 0.82; p < 0.001). Comparable diagnostic performance was noted in identifying pulmonary emphysema (AI: sensitivity 80.6%, specificity 66.7%, AUC 0.74 vs. reports: sensitivity 74.2%, specificity 97.1%, AUC 0.86; p = 0.064).
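The AUC values above are consistent with the single-operating-point case: when a reading is binary (finding present/absent), the ROC curve has one interior point and the AUC reduces to (sensitivity + specificity) / 2 (e.g., for aortic dilatation on AI, (0.963 + 0.814) / 2 ≈ 0.89). A minimal sketch of this calculation, assuming paired binary predictions and a binary reference standard (the helper function is illustrative, not from the study):

```python
def diagnostic_performance(predicted, reference):
    """Sensitivity, specificity, and AUC for binary readings vs. a reference standard.

    With a single binary operating point, the trapezoidal AUC
    simplifies to (sensitivity + specificity) / 2.
    """
    tp = sum(p and r for p, r in zip(predicted, reference))          # true positives
    tn = sum(not p and not r for p, r in zip(predicted, reference))  # true negatives
    fp = sum(p and not r for p, r in zip(predicted, reference))      # false positives
    fn = sum(not p and r for p, r in zip(predicted, reference))      # false negatives
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    auc = (sensitivity + specificity) / 2
    return sensitivity, specificity, auc
```

For graded outputs (e.g., a continuous calcium score), the full ROC curve would be used instead of this two-point simplification.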
Our results demonstrate that incorporating AI support platforms into radiology workflows can add significant value to clinical radiology reporting.