
Comparative Performance of Chatbots in Endodontic Clinical Decision Support: A 4-Day Accuracy and Consistency Study.

Author Information

Büker Mine, Sümbüllü Meltem, Arslan Hakan

Affiliations

Faculty of Dentistry, Department of Endodontics, Mersin University, Mersin, Turkey.

Faculty of Dentistry, Department of Endodontics, Atatürk University, Erzurum, Turkey.

Publication Information

Int Dent J. 2025 Jul 27;75(5):100920. doi: 10.1016/j.identj.2025.100920.

Abstract

INTRODUCTION AND AIMS

Despite the increasingly prevalent use of artificial intelligence in healthcare settings, concerns remain regarding its reliability and accuracy. This study assessed the overall, difficulty-level-specific, and day-to-day accuracy and consistency of 5 AI chatbots (ChatGPT-3.5, ChatGPT-4.o, Gemini 2.0 Flash, Copilot, and Copilot Pro) in answering clinically relevant endodontic questions.

METHODS

Seventy-six true/false (correct/incorrect) questions were developed by 2 endodontists and categorized by an expert into 3 difficulty levels (Basic [B], Intermediate [I], and Advanced [A]). Of these, 74 passed validation (B, n = 26; I, n = 24; A, n = 24), and 20 questions from each difficulty level were then selected, for a total of 60 questions. The questions were posed to the chatbots over a period of 4 days, at 3 different times each day (morning, afternoon, and evening).
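
As a rough illustration of the resulting design (60 questions × 4 days × 3 sessions = 720 responses per chatbot), a minimal Python sketch for tabulating overall accuracy, per-day accuracy, and a simple consistency figure might look as follows. This is a hypothetical reconstruction, not the authors' analysis code: the responses layout (one list of 4 days × 3 booleans per question) and every function name are assumptions.

# Hypothetical tabulation sketch; not the study's actual analysis code.
# Assumes responses[q][d][s] is True when the chatbot answered question q
# correctly on day d, session s.
DAYS, SESSIONS, QUESTIONS = 4, 3, 60  # design reported in the abstract

def overall_accuracy(responses):
    # Fraction correct over all 60 x 4 x 3 = 720 responses.
    answers = [ok for q in responses for day in q for ok in day]
    return sum(answers) / len(answers)

def accuracy_by_day(responses):
    # Per-day accuracy, pooling the 3 daily sessions (180 responses/day).
    return [
        sum(q[d][s] for q in responses for s in range(SESSIONS))
        / (QUESTIONS * SESSIONS)
        for d in range(DAYS)
    ]

def consistency(responses):
    # Simple proxy: fraction of questions whose correctness was identical
    # in all 12 sessions (the abstract does not specify the actual metric).
    stable = sum(1 for q in responses if len({ok for day in q for ok in day}) == 1)
    return stable / QUESTIONS

# Example: a chatbot that answers every question correctly every time.
demo = [[[True] * SESSIONS for _ in range(DAYS)] for _ in range(QUESTIONS)]
print(overall_accuracy(demo), accuracy_by_day(demo), consistency(demo))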

RESULTS

ChatGPT-4.o achieved the highest overall accuracy (82.5%) and the best B-level performance (95.0%), while Copilot Pro had the lowest overall accuracy (74.03%). Gemini and ChatGPT-3.5 showed similar overall accuracy. Across days, Gemini's accuracy improved significantly, Copilot Pro's decreased significantly, and no significant change was detected for either ChatGPT model or for Copilot. Within difficulty levels, Copilot Pro showed a significant day-to-day decrease in B-level accuracy, Copilot showed significant day-to-day increases in B- and I-level accuracy, and Gemini showed a significant day-to-day increase in A-level accuracy.
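
Read against the design above, each overall accuracy figure presumably pools 720 responses per chatbot (60 questions × 12 repetitions), so 82.5% would correspond to roughly 594 correct responses and 74.03% to roughly 533; this back-calculation is an inference from the reported design, not a count stated in the abstract.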

CONCLUSIONS

ChatGPT-4.o demonstrated superior performance, whereas Copilot and Copilot Pro showed insufficient accuracy. ChatGPT-3.5 and Gemini may be acceptable for general queries but require caution in more advanced cases.

CLINICAL RELEVANCE

ChatGPT-4.o demonstrated the highest overall accuracy and consistency in all question categories over 4 days, suggesting its potential as a reliable tool for clinical decision-making.

