Tilman J Fertitta Family College of Medicine, University of Houston, Houston, TX, United States.
Humana Integrated Health Sciences Institute, University of Houston, Houston, TX, United States.
J Med Internet Res. 2024 Apr 22;26:e55037. doi: 10.2196/55037.
ChatGPT is the most advanced large language model to date, with prior iterations having passed medical licensing examinations, provided clinical decision support, and improved diagnostics. Although limited, past studies of ChatGPT's performance found that artificial intelligence could pass the American Heart Association's advanced cardiovascular life support (ACLS) examinations with modifications. ChatGPT's accuracy has not been studied in more complex clinical scenarios. As heart disease and cardiac arrest remain leading causes of morbidity and mortality in the United States, finding technologies that help increase adherence to ACLS algorithms, which improves survival outcomes, is critical.
This study aims to examine the accuracy of ChatGPT in following ACLS guidelines for bradycardia and cardiac arrest.
We evaluated the accuracy of ChatGPT's responses to 2 simulations based on the 2020 American Heart Association ACLS guidelines with 3 primary outcomes of interest: the mean individual step accuracy, the accuracy score per simulation attempt, and the accuracy score for each algorithm. For each simulation step, ChatGPT was scored for correctness (1 point) or incorrectness (0 points). Each simulation was conducted 20 times.
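The scoring scheme described above can be sketched in a few lines of Python. The score matrix below is illustrative only (the abstract does not publish per-step data); it assumes a table of attempts by algorithm steps, with 1 for a correct step and 0 for an incorrect one, from which the per-step accuracy and per-attempt accuracy score are derived as medians with IQRs.

```python
import statistics

# Hypothetical score matrix: rows are simulation attempts (the study ran 20),
# columns are algorithm steps; 1 = step handled correctly, 0 = incorrect.
# These values are made up for illustration.
scores = [
    [1, 1, 0, 1],
    [1, 0, 0, 1],
    [1, 1, 1, 0],
    [0, 1, 1, 1],
    [1, 1, 0, 1],
]

def iqr_bounds(values):
    """Return (Q1, Q3) using the inclusive quartile method."""
    q = statistics.quantiles(values, n=4, method="inclusive")
    return q[0], q[2]

# Outcome 1: accuracy of each individual step, averaged across attempts.
step_acc = [sum(col) / len(col) for col in zip(*scores)]

# Outcome 2: accuracy score per simulation attempt (fraction of steps correct).
attempt_acc = [sum(row) / len(row) for row in scores]

print("median step accuracy:", statistics.median(step_acc), iqr_bounds(step_acc))
print("median attempt accuracy:", statistics.median(attempt_acc), iqr_bounds(attempt_acc))
```

The third outcome, an accuracy score per algorithm, would aggregate the same matrix separately for the cardiac arrest and bradycardia simulations.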
ChatGPT's median accuracy for each step was 85% (IQR 40%-100%) for cardiac arrest and 30% (IQR 13%-81%) for bradycardia. ChatGPT's median accuracy over 20 simulation attempts for cardiac arrest was 69% (IQR 67%-74%) and for bradycardia was 42% (IQR 33%-50%). We found that ChatGPT's outputs varied despite consistent input, the same actions were persistently missed, repetitive overemphasis hindered guidance, and erroneous medication information was presented.
This study highlights the need for consistent and reliable guidance to prevent potential medical errors and optimize the application of ChatGPT to enhance its reliability and effectiveness in clinical practice.