Rewthamrongsris Paak, Burapacheep Jirayu, Trachoo Vorapat, Porntaveetus Thantrira
Department of Anatomy, Faculty of Dentistry, Chulalongkorn University, Bangkok, Thailand.
Stanford University, Stanford, California, USA.
Int Dent J. 2025 Feb;75(1):206-212. doi: 10.1016/j.identj.2024.09.033. Epub 2024 Oct 12.
Infective endocarditis (IE) is a serious, life-threatening condition requiring antibiotic prophylaxis for high-risk individuals undergoing invasive dental procedures. As LLMs are rapidly adopted by dental professionals for their efficiency and accessibility, assessing their accuracy in answering critical questions about antibiotic prophylaxis for IE prevention is crucial.
Twenty-eight true/false questions based on the 2021 American Heart Association (AHA) guidelines for IE were posed to 7 popular LLMs. Each model answered every question in five independent runs under two prompt strategies: with a pre-prompt instructing it to respond as an experienced dentist, and without a pre-prompt. Inter-model comparisons utilised the Kruskal-Wallis test, followed by post-hoc pairwise comparisons in Prism 10 software.
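The analysis can be reproduced in outline as follows. This is a minimal sketch, assuming per-question accuracy (fraction of the five runs answered correctly) as the unit of comparison; the paper ran the statistics in Prism 10, whose Kruskal-Wallis post-hoc is Dunn's test, so the SciPy calls, the uncorrected Mann-Whitney pairwise tests, and the randomly generated scores below are illustrative stand-ins rather than the authors' exact pipeline.

```python
# Sketch of the inter-model comparison: a Kruskal-Wallis omnibus test across
# models on per-question accuracy, followed by pairwise post-hoc tests.
# Scores are randomly generated for illustration.
from itertools import combinations

import numpy as np
from scipy.stats import kruskal, mannwhitneyu

rng = np.random.default_rng(0)
models = ["GPT-4o", "Gemini 1.5 Pro", "Claude 3 Opus", "Gemini 1.5 Flash"]

# Hypothetical per-question accuracy: fraction of 5 runs correct, 28 questions.
scores = {m: rng.integers(0, 6, size=28) / 5 for m in models}

# Omnibus test across all models.
h_stat, p_value = kruskal(*scores.values())
print(f"Kruskal-Wallis H = {h_stat:.2f}, p = {p_value:.4f}")

# Post-hoc pairwise comparisons (uncorrected stand-in for Dunn's test).
for a, b in combinations(models, 2):
    _, p_pair = mannwhitneyu(scores[a], scores[b])
    print(f"{a} vs {b}: p = {p_pair:.4f}")
```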
Significant differences in accuracy were observed among the LLMs. All LLMs had a narrower confidence interval with a pre-prompt, and most showed improved performance, with Claude 3 Opus being the exception. GPT-4o had the highest accuracy (80% with a pre-prompt, 78.57% without), followed by Gemini 1.5 Pro (78.57% and 77.86%) and Claude 3 Opus (75.71% and 77.14%). Gemini 1.5 Flash had the lowest accuracy (68.57% and 63.57%). Without a pre-prompt, Gemini 1.5 Flash's accuracy was significantly lower than that of Claude 3 Opus, Gemini 1.5 Pro, and GPT-4o. With a pre-prompt, Gemini 1.5 Flash and Claude 3.5 Sonnet were significantly less accurate than Gemini 1.5 Pro and GPT-4o. None of the LLMs met the commonly used benchmark scores. All models gave both correct and incorrect answers to the same questions across runs, with no consistent pattern, except Claude 3.5 Sonnet with a pre-prompt, which gave consistently incorrect answers to eight questions across all five runs.
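For orientation, each reported percentage is consistent with 140 graded answers per model per prompt condition (28 questions × 5 runs), e.g. 80% = 112/140 and 63.57% = 89/140. The sketch below shows one way such a proportion and its confidence interval could be computed; the Wilson method is an assumption, since the abstract does not state how the intervals were derived.

```python
# Mapping the reported accuracies back to raw counts and a 95% CI on the
# proportion. 28 questions x 5 runs = 140 graded answers per condition.
# The Wilson interval is an assumed choice for illustration.
from statsmodels.stats.proportion import proportion_confint

N = 28 * 5  # graded answers per model per prompt condition
for model, accuracy in [("GPT-4o (pre-prompt)", 0.8000),
                        ("Gemini 1.5 Flash (no pre-prompt)", 0.6357)]:
    correct = round(accuracy * N)
    low, high = proportion_confint(correct, N, alpha=0.05, method="wilson")
    print(f"{model}: {correct}/{N} correct, 95% CI [{low:.3f}, {high:.3f}]")
```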
LLMs like GPT-4o show promise for retrieving AHA-IE guideline information, achieving up to 80% accuracy. However, complex medical questions may still pose a challenge. Pre-prompts offer a potential solution, and domain-specific training is essential for optimizing LLM performance in healthcare, especially with the emergence of models with increased token limits.
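To illustrate the pre-prompt strategy the authors recommend, here is a hypothetical sketch using the OpenAI Python SDK as one possible backend (the study queried seven different models through their respective interfaces); the persona wording and the sample question are assumptions, not the study's exact materials.

```python
# Hypothetical sketch of the two prompt strategies: the same true/false
# question posed with and without a persona pre-prompt, five runs each.
# Wording of the pre-prompt and question is illustrative only.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

PRE_PROMPT = "You are an experienced dentist."  # assumed persona wording
QUESTION = ("True or false: antibiotic prophylaxis is recommended before "
            "tooth extraction in a patient with a prosthetic cardiac valve.")

def ask(question: str, pre_prompt: str | None = None) -> str:
    """Pose one true/false question, optionally preceded by a pre-prompt."""
    messages = []
    if pre_prompt:
        messages.append({"role": "system", "content": pre_prompt})
    messages.append({"role": "user", "content": question})
    response = client.chat.completions.create(model="gpt-4o", messages=messages)
    return response.choices[0].message.content

# Five independent runs per question under each strategy, per the study design.
with_pre = [ask(QUESTION, PRE_PROMPT) for _ in range(5)]
without_pre = [ask(QUESTION) for _ in range(5)]
```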