From the VA National Teleradiology Program, 795 Willow Rd, Bldg 3342, Menlo Park, CA 94025 (A.J.D.G., R.S.); VA Palo Alto Health Care System, Palo Alto, Calif (T.F.O.); Department of Radiology, Stanford University School of Medicine, Stanford, Calif (T.F.O.); and VA Health Solutions, Patient Care Services, Washington, DC (T.S.).
Radiol Artif Intell. 2024 Sep;6(5):e240067. doi: 10.1148/ryai.240067.
The diagnostic performance of an artificial intelligence (AI) clinical decision support solution for acute intracranial hemorrhage (ICH) detection was assessed in a large teleradiology practice. The impact on radiologist read times and system efficiency was also quantified. A total of 61 704 consecutive noncontrast head CT examinations were retrospectively evaluated. System performance was calculated along with mean and median read times for CT studies obtained before (baseline, pre-AI period; August 2021 to May 2022) and after (post-AI period; January 2023 to February 2024) AI implementation. The AI solution had a sensitivity of 75.6%, specificity of 92.1%, accuracy of 91.7%, and positive predictive value of 21.1%, at an ICH prevalence of 2.70%. Of the 56 745 post-AI CT scans with no bleed identified by a radiologist, examinations falsely flagged as suspected ICH by the AI solution (n = 4464) took an average of 9 minutes 40 seconds (median, 8 minutes 7 seconds) to interpret, compared with 8 minutes 25 seconds (median, 6 minutes 48 seconds) for unremarkable CT scans before AI (n = 49 007) (P < .001) and 8 minutes 38 seconds (median, 6 minutes 53 seconds) after AI when ICH was not suspected by the AI solution (n = 52 281) (P < .001). CT scans with no bleed identified by the AI but reported as positive for ICH by the radiologist (n = 384) took an average of 14 minutes 23 seconds (median, 13 minutes 35 seconds) to interpret, compared with 13 minutes 34 seconds (median, 12 minutes 30 seconds) for CT scans correctly reported as a bleed by the AI (n = 1192) (P = .04). With lengthened read times for falsely flagged examinations, system inefficiencies may outweigh the potential benefits of using the tool in a high-volume, low-prevalence environment.

Keywords: Artificial Intelligence, Intracranial Hemorrhage, Read Time, Report Turnaround Time, System Efficiency

© RSNA, 2024.
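As a reader's consistency check (not part of the study's analysis), the reported performance metrics can be reproduced from the four post-AI subgroup counts given in the abstract, under the assumption that those subgroups, with the radiologist report as the reference standard, partition the post-AI examinations. A minimal sketch:

```python
# Minimal consistency check: reconstruct the post-AI confusion matrix from the
# subgroup counts reported in the abstract and recompute the metrics.
# Assumes the four subgroups partition the post-AI examinations and that the
# radiologist report is the reference standard.

tp = 1192    # bleeds correctly flagged as suspected ICH by the AI
fn = 384     # bleeds reported by the radiologist but not flagged by the AI
fp = 4464    # no-bleed examinations falsely flagged as suspected ICH
tn = 52281   # no-bleed examinations not flagged by the AI

total = tp + fn + fp + tn             # 58 321 post-AI examinations
sensitivity = tp / (tp + fn)          # 1192 / 1576      ~ 75.6%
specificity = tn / (tn + fp)          # 52 281 / 56 745  ~ 92.1%
accuracy = (tp + tn) / total          # 53 473 / 58 321  ~ 91.7%
prevalence = (tp + fn) / total        # 1576 / 58 321    ~ 2.70%
ppv = tp / (tp + fp)                  # 1192 / 5656      ~ 21.1%

print(f"sensitivity={sensitivity:.1%}  specificity={specificity:.1%}  "
      f"accuracy={accuracy:.1%}  prevalence={prevalence:.2%}  PPV={ppv:.1%}")
```

Under this assumption, all five recomputed values agree with the reported figures to the stated precision.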