Sorel Makara, Gurrala Chaitanya, Tadinada Aditya
Oral and Maxillofacial Radiology, University of Connecticut (UConn) School of Dental Medicine, Farmington, USA.
Orthodontics and Dentofacial Orthopedics, University of Connecticut (UConn) School of Dental Medicine, Farmington, USA.
Cureus. 2025 Jul 31;17(7):e89149. doi: 10.7759/cureus.89149. eCollection 2025 Jul.
Background and aim Orthodontic treatment planning is a complex process requiring a detailed understanding of dental, skeletal, and soft tissue relationships. Traditionally, treatment decisions are made through clinical expertise and evidence-based guidelines. However, the recent evolution of artificial intelligence (AI), particularly large language models (LLMs), has warranted an evaluation of their capabilities in streamlining clinical workflows. The aim of this study was to evaluate the proficiency and effectiveness of AI-based LLMs, specifically OpenAI's ChatGPT-4o and Google's Gemini 2.0 Flash Experimental (free version), in generating orthodontic treatment plans based on real clinical cases.

Materials and methods Ten published orthodontic case reports from reputable peer-reviewed journals were selected for the study and summarized into standardized clinical inputs, including patient age, occlusal relationships, skeletal and dental findings, and radiographic observations. These inputs were submitted to ChatGPT-4o and Gemini 2.0 Flash Experimental (free version) with prompts to generate extremely detailed, comprehensive treatment plans. The outputs were evaluated independently by two experienced orthodontists and one orthodontic resident using a four-point ordinal scale assessing the clinical accuracy, completeness, and relevance of each treatment plan. Inter-rater reliability was assessed using Krippendorff's alpha.

Results ChatGPT-4o produced treatment plans with higher clinical alignment and stronger evaluator consensus, as indicated by Krippendorff's alpha (α = 0.935), while Gemini's plans showed greater variability and only moderate agreement (α = 0.692). As assessed by the orthodontic reviewers, ChatGPT-4o generated treatment plans that incorporated more relevant clinical details and demonstrated stronger alignment with evidence-based standards. In contrast, Gemini's treatment plans were grounded in only minimally accurate clinical facts.

Conclusion LLMs such as ChatGPT-4o and Gemini 2.0 Flash Experimental (free version) demonstrate potential as valuable complementary tools in orthodontic treatment planning, especially in routine cases, but do not appear capable of replacing clinical expertise.
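The inter-rater reliability statistic used in the study, Krippendorff's alpha, can be sketched in a few lines of Python. The snippet below is a minimal illustration, not the study's analysis: it uses the interval distance metric (squared difference) as a simplification of the ordinal metric that would strictly apply to a four-point ordinal scale, and the rater scores are invented for illustration only.

```python
from itertools import permutations

def krippendorff_alpha(ratings, metric=lambda a, b: (a - b) ** 2):
    """Krippendorff's alpha via pairwise disagreements.

    ratings: one list per unit (treatment plan), holding the scores
    assigned by the raters who scored it; units with fewer than two
    scores are not pairable and are skipped.
    metric: squared difference by default (interval-level data); the
    ordinal metric used for ordinal scales is more involved.
    """
    units = [u for u in ratings if len(u) >= 2]
    n = sum(len(u) for u in units)  # total number of pairable values
    if n <= 1:
        return None

    # Observed disagreement: mean within-unit pairwise distance,
    # each unit's ordered-pair sum weighted by 1 / (m_u - 1).
    d_o = sum(
        sum(metric(a, b) for a, b in permutations(u, 2)) / (len(u) - 1)
        for u in units
    ) / n

    # Expected disagreement: mean pairwise distance over all values
    # pooled across units, as if raters had scored at random.
    pooled = [v for u in units for v in u]
    d_e = sum(metric(a, b) for a, b in permutations(pooled, 2)) / (n * (n - 1))

    return 1.0 if d_e == 0 else 1.0 - d_o / d_e

# Three raters scoring ten plans on a four-point scale
# (hypothetical numbers, not the study's data):
scores = [[4, 4, 4], [3, 3, 4], [4, 4, 4], [2, 2, 3], [4, 4, 4],
          [3, 3, 3], [4, 3, 4], [4, 4, 4], [3, 3, 3], [4, 4, 4]]
print(round(krippendorff_alpha(scores), 3))  # ≈ 0.747
```

Values near 1.0, such as the α = 0.935 reported for the ChatGPT-4o ratings, indicate near-perfect agreement among raters; values in the range of the 0.692 reported for Gemini are conventionally read as moderate agreement.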