Han Wenzheng, Wan Chao, Shan Rui, Xu Xudong, Chen Guang, Zhou Wenjie, Yang Yuxuan, Feng Gang, Li Xiaoning, Yang Jianghua, Jin Kai, Chen Qing
The First Affiliated Hospital, Wannan Medical College, Wuhu, Anhui, China.
Wannan Medical College, Wuhu, Anhui, China.
Clin Chem Lab Med. 2025 Apr 21. doi: 10.1515/cclm-2025-0089.
Accurate medical laboratory reports are essential for delivering high-quality healthcare. Recently, advanced artificial intelligence models, such as those in the ChatGPT series, have shown considerable promise in this domain. This study assessed the performance of specific GPT models (4o, o1, and o1 mini) in identifying errors within medical laboratory reports and in providing treatment recommendations.
In this retrospective study, 86 nucleic acid test reports covering seven upper respiratory tract pathogens were compiled. A total of 285 errors drawn from four common error categories were intentionally and randomly introduced into these reports, generating 86 error-containing reports. GPT models were tasked with detecting these errors, with three senior medical laboratory scientists (SMLS) and three medical laboratory interns (MLI) serving as control groups. Additionally, GPT models were tasked with generating accurate and reliable treatment recommendations for positive test outcomes based on the 86 correct reports. χ² tests, Kruskal-Wallis tests, and Wilcoxon tests were used for statistical analysis where appropriate.
Compared with SMLS or MLI, GPT models accurately detected three error types, with average detection rates across the three models of 88.9 % (omission), 91.6 % (time sequence), and 91.7 % (the same individual acting as both inspector and reviewer). However, the average detection rate for result input format errors was only 51.9 %, indicating relatively poor performance in this category. GPT models exhibited substantial to almost perfect agreement with SMLS in detecting total errors (kappa [min, max]: 0.778, 0.837), whereas agreement between GPT models and MLI was moderately lower (kappa [min, max]: 0.632, 0.696). In reading all 86 reports, GPT models required markedly less reading time than SMLS or MLI (all p<0.001). Notably, the GPT-o1 mini model showed better consistency of error identification than the GPT-o1 model, which in turn outperformed the GPT-4o model. Pairwise comparisons of each GPT model's outputs across three repeated runs showed almost perfect agreement (kappa [min, max]: 0.912, 0.996). GPT-o1 mini also required markedly less reading time than GPT-4o or GPT-o1 (all p<0.001). Additionally, GPT-o1 significantly outperformed GPT-4o and o1 mini in providing accurate and reliable treatment recommendations (all p<0.0001).
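The agreement statistics above are Cohen's kappa values computed between two raters' per-error detection outcomes. As a minimal sketch, assuming each of the 285 introduced errors is scored 1 (detected) or 0 (missed) by each rater, the kappa could be computed as follows (the example data are hypothetical, not from the study):

```python
def cohens_kappa(r1, r2):
    """Cohen's kappa for two raters scoring the same items.

    kappa = (p_o - p_e) / (1 - p_e), where p_o is the observed
    agreement and p_e is the agreement expected by chance from
    each rater's marginal label frequencies.
    """
    assert len(r1) == len(r2) and len(r1) > 0
    n = len(r1)
    # Observed agreement: fraction of items both raters labeled identically.
    p_o = sum(a == b for a, b in zip(r1, r2)) / n
    # Chance agreement from each rater's marginal frequencies.
    labels = set(r1) | set(r2)
    p_e = sum((r1.count(k) / n) * (r2.count(k) / n) for k in labels)
    return (p_o - p_e) / (1 - p_e)

# Hypothetical detection outcomes (1 = detected, 0 = missed)
# for a GPT model and an SMLS rater over four errors.
gpt = [1, 1, 1, 0]
smls = [1, 1, 0, 0]
print(cohens_kappa(gpt, smls))  # 0.5
```

The same function applies to the repeated-run consistency comparison, treating two runs of the same model as the two raters.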
GPT models demonstrated competent detection of several categories of medical laboratory report errors, as well as accurate and reliable treatment recommendations, potentially reducing work hours and enhancing clinical decision-making.