

Comparative analysis of AI chatbot (ChatGPT-4.0 and Microsoft Copilot) and expert responses to common orthodontic questions: patient and orthodontist evaluations.

Author Information

Salmanpour Farhad, Camcı Hasan, Geniş Ömer

Affiliation

Department of Orthodontics, Afyonkarahisar Health Sciences University, Güvenevler, İsmet İnönü Cd. No:4, Afyonkarahisar, Merkez, 03030, Turkey.

Publication Information

BMC Oral Health. 2025 Jun 3;25(1):896. doi: 10.1186/s12903-025-06194-w.


DOI: 10.1186/s12903-025-06194-w
PMID: 40462054
Abstract

OBJECTIVE: The aim of this study was to evaluate the adequacy of responses provided by experts and artificial intelligence-based chatbots (ChatGPT-4.0 and Microsoft Copilot) to frequently asked orthodontic questions, utilizing scores assigned by patients and orthodontists. METHODS: Fifteen questions were randomly selected from the FAQ section of the American Association of Orthodontists (AAO) website, addressing common concerns related to orthodontic treatments, patient care, and post-treatment guidelines. Expert responses, along with those from ChatGPT-4.0 and Microsoft Copilot, were presented in a survey format via Google Forms. Fifty-two orthodontists and 102 patients rated the three responses for each question on a scale from 1 (least adequate) to 10 (most adequate). The findings were analyzed comparatively within and between groups. RESULTS: Expert responses consistently received the highest scores from both patients and orthodontists, particularly in critical areas such as Questions 1, 2, 4, 9, and 11, where they significantly outperformed chatbots (P < 0.05). Patients generally rated expert responses higher than those of chatbots, underscoring the reliability of clinical expertise. However, ChatGPT-4.0 showed competitive performance in some questions, achieving its highest score in Question 14 (8.16 ± 1.24), but scored significantly lower than experts in several key areas (P < 0.05). Microsoft Copilot generally received the lowest scores, although it demonstrated statistically comparable performance to other groups in certain questions, such as Questions 3 and 12 (P > 0.05). CONCLUSIONS: Overall, the scores for ChatGPT-4.0 and Microsoft Copilot were deemed acceptable (6.0 and above). However, both patients and orthodontists generally rated the expert responses as more adequate. This suggests that current chatbots do not yet match the theoretical adequacy of expert opinions.

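The abstract reports within- and between-group comparisons of 1-10 adequacy scores across three response sources (expert, ChatGPT-4.0, Microsoft Copilot) but does not name the statistical test used. As a purely illustrative sketch, the comparison below applies a Friedman test — a non-parametric test for k related samples, plausible here since each rater scores all three responses to a question — on synthetic scores invented for demonstration; nothing about the code reflects the study's actual data or analysis.

```python
def friedman_statistic(*groups):
    """Friedman chi-square statistic for k related samples.

    Element i of every group is assumed to come from the same rater;
    ties within a rater receive average ranks. No tie correction is
    applied to the statistic (plain textbook formula).
    """
    k = len(groups)          # number of response sources
    n = len(groups[0])       # number of raters
    rank_sums = [0.0] * k
    for i in range(n):
        row = [g[i] for g in groups]
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        j = 0
        while j < k:
            m = j
            # extend m over any run of tied values
            while m + 1 < k and row[order[m + 1]] == row[order[j]]:
                m += 1
            avg = (j + m) / 2 + 1          # average 1-based rank for the tie run
            for t in range(j, m + 1):
                ranks[order[t]] = avg
            j = m + 1
        for idx in range(k):
            rank_sums[idx] += ranks[idx]
    return (12.0 / (n * k * (k + 1))) * sum(r * r for r in rank_sums) \
        - 3 * n * (k + 1)

# Synthetic 1-10 adequacy scores from 10 hypothetical raters.
expert  = [9, 8, 9, 10, 8, 9, 9, 8, 10, 9]
chatgpt = [7, 7, 6, 8, 7, 7, 8, 6, 7, 8]
copilot = [6, 5, 5, 7, 5, 6, 6, 5, 6, 7]

stat = friedman_statistic(expert, chatgpt, copilot)
print(f"Friedman chi-square = {stat:.2f}")   # df = k - 1 = 2
```

With these made-up scores every rater ranks expert above ChatGPT-4.0 above Copilot, so the statistic reaches its maximum of 2n = 20.0; against the chi-square critical value of about 5.99 (alpha = 0.05, df = 2) that would indicate a significant difference among the three sources.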

Similar Articles

[1]
Comparative analysis of AI chatbot (ChatGPT-4.0 and Microsoft Copilot) and expert responses to common orthodontic questions: patient and orthodontist evaluations.

BMC Oral Health. 2025-6-3

[2]
Information from digital and human sources: A comparison of chatbot and clinician responses to orthodontic questions.

Am J Orthod Dentofacial Orthop. 2025-5-6

[3]
Can artificial intelligence models serve as patient information consultants in orthodontics?

BMC Med Inform Decis Mak. 2024-7-29

[4]
A Comparison of Prostate Cancer Screening Information Quality on Standard and Advanced Versions of ChatGPT, Google Gemini, and Microsoft Copilot: A Cross-Sectional Study.

Am J Health Promot. 2025-6

[5]
Effectiveness of AI-generated orthodontic treatment plans compared to expert orthodontist recommendations: a cross-sectional pilot study.

Dental Press J Orthod. 2025-3-24

[6]
Performance assessment of artificial intelligence chatbots (ChatGPT-4 and Copilot) for sharing insights on 3D-printed orthodontic appliances: A cross-sectional study.

Int Orthod. 2025-9

[7]
Proficiency, Clarity, and Objectivity of Large Language Models Versus Specialists' Knowledge on COVID-19's Impacts in Pregnancy: Cross-Sectional Pilot Study.

JMIR Form Res. 2025-2-5

[8]
Assessing the Quality of Patient Education Materials on Cardiac Catheterization From Artificial Intelligence Chatbots: An Observational Cross-Sectional Study.

Cureus. 2024-9-23

[9]
Assessment of readability, reliability, and quality of ChatGPT®, BARD®, Gemini®, Copilot®, Perplexity® responses on palliative care.

Medicine (Baltimore). 2024-8-16

[10]
Comparative accuracy of ChatGPT-4, Microsoft Copilot and Google Gemini in the Italian entrance test for healthcare sciences degrees: a cross-sectional study.

BMC Med Educ. 2024-6-26

Cited By

[1]
Reliability of Large Language Model-Based Chatbots Versus Clinicians as Sources of Information on Orthodontics: A Comparative Analysis.

Dent J (Basel). 2025-7-24

References

[1]
Performance of Chat Generative Pretrained Transformer-4.0 in determining labiolingual localization of maxillary impacted canine and presence of resorption in incisors through panoramic radiographs: A retrospective study.

Am J Orthod Dentofacial Orthop. 2025-8

[2]
Large-Language Models in Orthodontics: Assessing Reliability and Validity of ChatGPT in Pretreatment Patient Education.

Cureus. 2024-8-29

[3]
Roles, Users, Benefits, and Limitations of Chatbots in Health Care: Rapid Review.

J Med Internet Res. 2024-7-23

[4]
The performance of artificial intelligence models in generating responses to general orthodontic questions: ChatGPT vs Google Bard.

Am J Orthod Dentofacial Orthop. 2024-6

[5]
Accuracy and Completeness of ChatGPT-Generated Information on Interceptive Orthodontics: A Multicenter Collaborative Study.

J Clin Med. 2024-1-27

[6]
Content analysis of AI-generated (ChatGPT) responses concerning orthodontic clear aligners.

Angle Orthod. 2024-5-1

[7]
Accuracy of ChatGPT-Generated Information on Head and Neck and Oromaxillofacial Surgery: A Multicenter Collaborative Analysis.

Otolaryngol Head Neck Surg. 2024-6

[8]
The Potential Usefulness of ChatGPT in Oral and Maxillofacial Radiology.

Cureus. 2023-7-19

[9]
Artificial Intelligence Discusses the Role of Artificial Intelligence in Translational Medicine: A Interview With ChatGPT.

JACC Basic Transl Sci. 2023-1-18

[10]
Pain management practices in the emergency departments in Turkey.

Turk J Emerg Med. 2021-10-29
