Kuas Caglar, Canakci Mustafa Emin, Acar Nurdan, Kanbakan Altug, Cetin Murat, Gunsoy Ertug
Emergency Department, Eskisehir Osmangazi University, Osmangazi University Meşelik Campus, Osmangazi University Health Practice and Research Hospital, Eskişehir, Turkey.
J Emerg Med. 2025 Sep;76:17-25. doi: 10.1016/j.jemermed.2025.07.002. Epub 2025 Jul 8.
Poisoning cases involve a wide variety of toxic agents and remain a significant concern for emergency departments. Rapid and accurate intervention is crucial in these cases; however, emergency physicians often face challenges in accessing and applying up-to-date toxicology information in a timely manner. ChatGPT, an AI language model, shows promise as a diagnostic aid in healthcare settings, offering potentially valuable support in the management of toxicological emergencies.
In this study, we aimed to evaluate the potential of ChatGPT in answering toxicology study guide questions, simulating its utility as a decision-support tool.
This study evaluated ChatGPT's performance on toxicology study guide questions from the Study Guide for Goldfrank's Toxicologic Emergencies, simulating its utility as a decision-support tool in toxicological emergencies. ChatGPT's answers were compared against the accuracy rates of medical trainees who answered the same questions; these trainee accuracy rates served as the human-response benchmark.
ChatGPT correctly answered 89% of the toxicology questions, outperforming the human responders, whose mean accuracy rate was 56%. However, ChatGPT was less accurate on pediatric and complex case-based questions, highlighting areas where AI models may require further refinement.
The study suggests that ChatGPT has substantial potential as an assistive tool for emergency physicians managing toxicological emergencies, particularly in high-stress and fast-paced environments. Despite its strong performance, the AI model's limitations in handling specific clinical scenarios indicate the need for continuous improvement and careful application in medical practice.