Department of Neurology, Santa Barbara Cottage Hospital, 219 Nogales Ave., Ste. F, Santa Barbara, CA 93105, United States.
Department of Emergency Medicine, Beth Israel Deaconess Medical Center, 1 Deaconess Rd, Boston, MA 02215, United States.
J Stroke Cerebrovasc Dis. 2021 Jul;30(7):105829. doi: 10.1016/j.jstrokecerebrovasdis.2021.105829. Epub 2021 May 11.
To compare physicians' ability to read the Alberta Stroke Program Early CT Score (ASPECTS) in patients with a large vessel occlusion within 6 hours of symptom onset when assisted by a machine learning-based automatic software tool versus when reading unassisted.
Fifty baseline CT scans selected from two prior studies (CRISP and GAMES-RP) were read by 3 experienced neuroradiologists who had access to a follow-up MRI; the average of their ASPECTS reads was used as the reference standard. Two additional neuroradiologists and 6 non-neuroradiologist readers then read the scans both with and without assistance from the software reader-augmentation program, and the change in reader performance was determined. The primary hypothesis was that agreement between typical readers and the consensus of the 3 expert neuroradiologists would be higher for software-assisted than for unassisted reads. Agreement was quantified as the percentage of individual ASPECTS regions (50 cases × 10 regions each; N = 500) in which the reader agreed with the expert consensus.
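As a rough illustration of the agreement metric described above (a minimal sketch, not the study's code), the region-level comparison can be thought of as 500 paired calls, assuming each ASPECTS region is scored as abnormal (1) or normal (0); all arrays and values below are hypothetical:

# Illustrative sketch only: region-level agreement with the expert consensus,
# assuming binary abnormal/normal calls for each of the 10 ASPECTS regions.
import numpy as np

rng = np.random.default_rng(0)
n_cases, n_regions = 50, 10                      # 50 scans x 10 regions = 500 comparisons

# Hypothetical reads: 1 = region scored abnormal, 0 = normal.
expert_consensus = rng.integers(0, 2, size=(n_cases, n_regions))
reader_unassisted = rng.integers(0, 2, size=(n_cases, n_regions))

def region_agreement(reader, reference):
    # Fraction of the 500 region-level calls where the reader matches the reference.
    return np.mean(reader == reference)

print(f"Unassisted agreement: {region_agreement(reader_unassisted, expert_consensus):.0%}")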
Without software assistance, typical non-neuroradiologist readers agreed with the expert consensus read in 72% of the 500 ASPECTS regions; the automated software alone agreed in 77%. When the typical readers read the scans in conjunction with the software, agreement improved to 78% (P<0.0001, test of proportions). For total ASPECTS, the software alone achieved correlations similar to those of the expert readers, who had access to the follow-up MRI scan to enhance the quality of their reads.
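The abstract reports the assisted-versus-unassisted comparison as a test of proportions but does not specify its exact form (e.g., pooled across readers or paired per region). One plausible, purely illustrative form is a two-sample proportion z-test; the counts below are placeholders, not the study's data:

# Hypothetical two-proportion z-test for assisted vs. unassisted agreement rates;
# counts are placeholders and do not reproduce the study's analysis.
from statsmodels.stats.proportion import proportions_ztest

n_comparisons = 500        # region-level comparisons per condition (placeholder)
agree_assisted = 390       # placeholder count of regions in agreement (assisted)
agree_unassisted = 360     # placeholder count of regions in agreement (unassisted)

z_stat, p_value = proportions_ztest(
    count=[agree_assisted, agree_unassisted],
    nobs=[n_comparisons, n_comparisons],
)
print(f"z = {z_stat:.2f}, two-sided p = {p_value:.4f}")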
Typical readers showed statistically significant improvement in their scoring when scans were read in conjunction with the automated software, achieving agreement rates comparable to those of the neuroradiologists.