Ngoc Nguyen Oanh, Amin Doaa, Bennett James, Hetlevik Øystein, Malik Sara, Tout Andrew, Vornhagen Heike, Vellinga Akke
CARA Network, School of Public Health, Physiotherapy and Sports Science, University College Dublin, Dublin, Ireland.
NIHR In Practice Fellow, Hull York Medical School, University of Hull, Hull HU6 7RX, UK.
J Antimicrob Chemother. 2025 May 2;80(5):1324-1330. doi: 10.1093/jac/dkaf077.
Large language models (LLMs) are becoming ubiquitous and widely used, and could also be applied to diagnosis and treatment. National antibiotic prescribing guidelines are tailored to, and informed by, local laboratory data on antimicrobial resistance.
Based on 24 vignettes describing the type of infection, gender, age group and comorbidities, GPs and LLMs were prompted to provide a treatment. Four countries (Ireland, the UK, the USA and Norway) were included; a GP from each country and six LLMs (ChatGPT, Gemini, Copilot, Mistral AI, Claude and Llama 3.1) received the vignettes, which included the patient's location (country). Responses were compared with each country's national prescribing guidelines. In addition, limitations of LLMs such as hallucination, toxicity and data leakage were assessed.
GPs' answers to the vignettes showed high accuracy for diagnosis (96%-100%) and for the yes/no antibiotic prescribing decision (83%-92%). GPs referenced (100%) and prescribed (58%-92%) according to national guidelines, but dose/duration of treatment was less accurate (50%-75%). Overall, the GPs' mean accuracy was 74%. LLMs scored high for diagnosis (92%-100%), antibiotic prescribing (88%-100%) and choice of antibiotic (59%-100%), but correct referencing often failed (38%-96%), in particular for the Norwegian guidelines (0%-13%). Data leakage proved to be an issue, as personal information was repeated in the models' responses to the vignettes.
LLMs may be safe to guide antibiotic prescribing in general practice. However, to interpret vignettes, apply national guidelines and prescribe the right dose and duration, GPs remain best placed.