Mahmoud Reema, Shuster Amir, Kleinman Shlomi, Arbel Shimrit, Ianculovici Clariel, Peleg Oren
Resident, Department of Oral and Maxillofacial Surgery, Tel-Aviv Sourasky Medical Center, Tel Aviv, Israel.
Senior Surgeon, Department of Oral and Maxillofacial Surgery, Tel-Aviv Sourasky Medical Center, Tel Aviv, Israel; Senior Surgeon, Department of Oral and Maxillofacial Surgery, Goldschleger School of Dental Medicine, Tel-Aviv University, Tel-Aviv, Israel.
J Oral Maxillofac Surg. 2025 Mar;83(3):382-389. doi: 10.1016/j.joms.2024.11.007. Epub 2024 Nov 19.
While artificial intelligence has significantly impacted medicine, the application of large language models (LLMs) in oral and maxillofacial surgery (OMS) remains underexplored.
This study aimed to measure and compare the accuracy of 4 leading LLMs on OMS board examination questions and to identify specific areas for improvement.
STUDY DESIGN, SETTING, AND SAMPLE: An in-silico cross-sectional study was conducted to evaluate 4 artificial intelligence chatbots on 714 OMS board examination questions.
The predictor variable was the LLM used: LLM 1 (Generative Pre-trained Transformer 4o [GPT-4o], OpenAI, San Francisco, CA), LLM 2 (Generative Pre-trained Transformer 3.5 [GPT-3.5], OpenAI, San Francisco, CA), LLM 3 (Gemini, Google, Mountain View, CA), and LLM 4 (Copilot, Microsoft, Redmond, WA).
The primary outcome variable was accuracy, defined as the percentage of correct answers provided by each LLM. Secondary outcomes included the LLMs' ability to correct errors on subsequent attempts and their performance across 11 specific OMS subject domains: medicine and anesthesia, dentoalveolar and implant surgery, maxillofacial trauma, maxillofacial infections, maxillofacial pathology, salivary glands, oncology, maxillofacial reconstruction, temporomandibular joint anatomy and pathology, craniofacial and clefts, and orthognathic surgery.
No additional covariates were considered.
Statistical analysis included one-way ANOVA and post hoc Tukey honest significant difference (HSD) tests to compare performance across chatbots. χ² tests were used to assess response consistency and error correction, with statistical significance set at P < .05.
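The analysis pipeline described above (one-way ANOVA, post hoc Tukey HSD, and χ² tests) can be sketched in Python with SciPy. The study's raw per-question data are not reproduced here, so the per-domain accuracy values and error counts below are hypothetical placeholders, scaled to roughly match the proportions reported in the results.

```python
# Hedged sketch of the abstract's statistical pipeline.
# All input data below are hypothetical, not the study's actual scores.
import numpy as np
from scipy.stats import f_oneway, tukey_hsd, chi2_contingency

# Hypothetical per-domain accuracy (%) for each LLM across 11 OMS domains,
# centered on the mean accuracies reported in the abstract.
rng = np.random.default_rng(0)
llm1 = rng.normal(83.7, 5, 11)  # GPT-4o
llm2 = rng.normal(64.8, 5, 11)  # GPT-3.5
llm3 = rng.normal(66.9, 5, 11)  # Gemini
llm4 = rng.normal(62.2, 5, 11)  # Copilot

# One-way ANOVA: do mean accuracies differ across the four chatbots?
f_stat, p_anova = f_oneway(llm1, llm2, llm3, llm4)

# Post hoc Tukey HSD: which specific pairs of chatbots differ?
tukey = tukey_hsd(llm1, llm2, llm3, llm4)

# Chi-square test of independence on error correction:
# rows = models, columns = (corrected, not corrected).
# Error totals are hypothetical; correction rates follow the abstract.
errors = np.array([112, 251, 237, 270])
rates = np.array([0.982, 0.9344, 0.7071, 0.2926])
corrected = np.round(errors * rates)
table = np.column_stack([corrected, errors - corrected])
chi2, p_chi2, dof, _ = chi2_contingency(table)

print(f"ANOVA p = {p_anova:.4g}, chi-square p = {p_chi2:.4g}")
```

`tukey_hsd` returns a result object whose `.pvalue` attribute is a 4×4 matrix of pairwise comparison p-values, mirroring the pairwise P values reported in the results.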
LLM 1 achieved the highest accuracy with an average score of 83.69%, statistically significantly outperforming LLM 3 (66.85%, P = .002), LLM 2 (64.83%, P = .001), and LLM 4 (62.18%, P < .001). Across the 11 OMS subject domains, LLM 1 consistently had the highest accuracy rates. LLM 1 also corrected 98.2% of errors, while LLM 2 corrected 93.44%, both statistically significantly higher than LLM 4 (29.26%) and LLM 3 (70.71%) (P < .001).
LLM 1 (GPT-4o) significantly outperformed other models in both accuracy and error correction, indicating its strong potential as a tool for enhancing OMS education. However, the variability in performance across different domains highlights the need for ongoing refinement and continued evaluation to integrate these LLMs more effectively into the OMS field.