Salmanpour Farhad, Camcı Hasan, Geniş Ömer
Department of Orthodontics, Afyonkarahisar Health Sciences University, Güvenevler, İsmet İnönü Cd. No:4, Afyonkarahisar, Merkez, 03030, Turkey.
BMC Oral Health. 2025 Jun 3;25(1):896. doi: 10.1186/s12903-025-06194-w.
OBJECTIVE: The aim of this study was to evaluate the adequacy of responses provided by experts and artificial intelligence-based chatbots (ChatGPT-4.0 and Microsoft Copilot) to frequently asked orthodontic questions, using scores assigned by patients and orthodontists.
METHODS: Fifteen questions were randomly selected from the FAQ section of the American Association of Orthodontists (AAO) website, addressing common concerns about orthodontic treatment, patient care, and post-treatment guidelines. Expert responses, along with those from ChatGPT-4.0 and Microsoft Copilot, were presented in a survey format via Google Forms. Fifty-two orthodontists and 102 patients rated the three responses to each question on a scale from 1 (least adequate) to 10 (most adequate). The findings were analyzed comparatively within and between groups.
RESULTS: Expert responses consistently received the highest scores from both patients and orthodontists, particularly for critical items such as Questions 1, 2, 4, 9, and 11, where they significantly outperformed the chatbots (P < 0.05). Patients generally rated expert responses higher than chatbot responses, underscoring the reliability of clinical expertise. ChatGPT-4.0 was competitive on some questions, achieving its highest score on Question 14 (8.16 ± 1.24), but scored significantly lower than the experts in several key areas (P < 0.05). Microsoft Copilot generally received the lowest scores, although its performance was statistically comparable to the other groups on certain questions, such as Questions 3 and 12 (P > 0.05).
CONCLUSIONS: Overall, the scores for ChatGPT-4.0 and Microsoft Copilot were deemed acceptable (6.0 and above). However, both patients and orthodontists generally rated the expert responses as more adequate, suggesting that current chatbots do not yet match the theoretical adequacy of expert opinions.
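The abstract states that ratings were compared within and between groups but does not name the statistical tests used. The sketch below illustrates one plausible way such a comparison could be run; the choice of Kruskal-Wallis with pairwise Mann-Whitney follow-ups is an assumption, and the rating data are simulated rather than taken from the study.

```python
# Hypothetical sketch of a between-group comparison of 1-10 adequacy ratings.
# Tests and data are assumptions; the paper does not specify its analysis here.
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)

# Simulated ratings for one question from the 102 patient raters.
ratings = {
    "expert":  rng.integers(7, 11, size=102),   # scores 7-10
    "chatgpt": rng.integers(5, 10, size=102),   # scores 5-9
    "copilot": rng.integers(4, 9,  size=102),   # scores 4-8
}

# Omnibus test across the three response sources.
h, p = stats.kruskal(*ratings.values())
print(f"Kruskal-Wallis: H={h:.2f}, p={p:.4f}")

# Pairwise Mann-Whitney U tests with a simple Bonferroni correction.
pairs = [("expert", "chatgpt"), ("expert", "copilot"), ("chatgpt", "copilot")]
for a, b in pairs:
    u, p_pair = stats.mannwhitneyu(ratings[a], ratings[b])
    print(f"{a} vs {b}: U={u:.0f}, corrected p={min(p_pair * len(pairs), 1.0):.4f}")
```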