ChatGPT-4 as an Assistant for Evidence-Based Decision-Making Among General Dentists: An Observational Feasibility Study.

Author Information

Shiva Shankar Bugude, Mohan Sasankoti

Affiliations

Periodontology, Qassim University, Buraydah, SAU.

Oral Medicine and Radiology, Hope Health Inc, Florence, USA.

Publication Information

Cureus. 2025 Feb 24;17(2):e79556. doi: 10.7759/cureus.79556. eCollection 2025 Feb.

DOI: 10.7759/cureus.79556
PMID: 40012697
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11859412/
Abstract

Background: Evidence-based decision-making (EBDM) is essential in contemporary dentistry. However, navigating the extensive and constantly evolving scientific literature can be challenging. Large language models (LLMs), such as ChatGPT-4, have the potential to transform EBDM by analyzing vast datasets and extracting critical information, thereby significantly reducing the time required to find evidence. This observational feasibility study investigates ChatGPT-4's potential in dental EBDM, focusing on its capabilities, strengths, and limitations.

Materials and methods: In this observational feasibility study, two independent examiners conducted interactive sessions with ChatGPT-4. Five clinical scenarios were explored using the Google Chrome web browser, accessing publicly available scientific evidence from Cochrane, ADA, and PubMed. This approach ensured compliance with the Cochrane guidelines for EBDM. Two independent dentists engaged with ChatGPT-4 in simulated real-life clinical scenarios to seek scientific information. The output from ChatGPT-4 for each scenario was assessed based on predetermined criteria. Its responses were evaluated for accuracy, relevance, efficiency, actionability, and ethical considerations using the ChatGPT-4 Response Scoring System (CRSS) and the ChatGPT-4 Generative Ability Matrix (C-GAM).

Results: ChatGPT-4 demonstrated consistent performance across all five clinical scenarios, achieving a C-GAM score of 46.4% and a CRSS score of 12 out of 28. It effectively identified relevant sources of evidence and provided concise summaries, potentially saving valuable time and enhancing access to information. No significant differences in scores were found when the responses to all clinical scenarios were analyzed independently by the two researchers. However, a notable limitation was its inability to provide specific web links directing users to relevant scientific articles. Additionally, while ChatGPT-4 offered suggestions for incorporating the latest scientific publications into decision-making, it could not generate direct links to these articles.

Conclusion: Despite its current limitations, ChatGPT-4, as a generative AI, can assist clinicians in making evidence-based decisions, and it can save time compared to conventional search engines. Ethical considerations must be prioritized in training these models to ensure that clinicians make responsible, evidence-based decisions rather than relying solely on specific evidence statements provided by ChatGPT-4. This model shows potential as an AI tool for EBDM in dentistry. Further development and training could address existing limitations and enhance its effectiveness; however, clinicians must retain ultimate responsibility for informed decisions, necessitating expertise and critical evaluation of the evidence presented.
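The abstract does not define how the CRSS and C-GAM instruments aggregate per-criterion ratings into the reported totals. As a rough illustration only of how the five evaluation criteria (accuracy, relevance, efficiency, actionability, ethics) might be summed into a percentage, here is a minimal sketch; the 0-4 scale, the example ratings, and the aggregation rule are assumptions for illustration, not the authors' actual rubric or data:

```python
# Hypothetical sketch: aggregating per-criterion ratings into a percentage,
# in the spirit of the CRSS/C-GAM totals reported in the abstract.
# The 0-4 scale and the example ratings below are invented for illustration;
# they are NOT the study's actual scoring rubric or results.

CRITERIA = ["accuracy", "relevance", "efficiency", "actionability", "ethics"]
MAX_PER_CRITERION = 4  # assumed ordinal scale, 0 (poor) to 4 (excellent)

def percent_score(ratings: dict) -> float:
    """Sum the per-criterion ratings and express the total as a
    percentage of the maximum attainable score."""
    total = sum(ratings[c] for c in CRITERIA)
    return 100 * total / (MAX_PER_CRITERION * len(CRITERIA))

# Example: one simulated scenario rated by one examiner.
example = {"accuracy": 3, "relevance": 4, "efficiency": 2,
           "actionability": 2, "ethics": 3}
print(f"{percent_score(example):.1f}%")  # 14 of 20 points -> 70.0%
```

In practice such scores would be averaged across scenarios and examiners, with inter-rater agreement checked, as the study's two-examiner design implies.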


Similar Articles

1. ChatGPT-4 as an Assistant for Evidence-Based Decision-Making Among General Dentists: An Observational Feasibility Study. Cureus. 2025 Feb 24;17(2):e79556. doi: 10.7759/cureus.79556. eCollection 2025 Feb.
2. Evaluation of the Performance of Generative AI Large Language Models ChatGPT, Google Bard, and Microsoft Bing Chat in Supporting Evidence-Based Dentistry: Comparative Mixed Methods Study. J Med Internet Res. 2023 Dec 28;25:e51580. doi: 10.2196/51580.
3. PICOT questions and search strategies formulation: A novel approach using artificial intelligence automation. J Nurs Scholarsh. 2025 Jan;57(1):5-16. doi: 10.1111/jnu.13036. Epub 2024 Nov 24.
4. Evidence-based potential of generative artificial intelligence large language models in orthodontics: a comparative study of ChatGPT, Google Bard, and Microsoft Bing. Eur J Orthod. 2024 Apr 13. doi: 10.1093/ejo/cjae017.
5. Folic acid supplementation and malaria susceptibility and severity among people taking antifolate antimalarial drugs in endemic areas. Cochrane Database Syst Rev. 2022 Feb 1;2(2022):CD014217. doi: 10.1002/14651858.CD014217.
6. Exploring the Potential of ChatGPT-4 in Predicting Refractive Surgery Categorizations: Comparative Study. JMIR Form Res. 2023 Dec 28;7:e51798. doi: 10.2196/51798.
7. Evaluating ChatGPT-4's Diagnostic Accuracy: Impact of Visual Data Integration. JMIR Med Inform. 2024 Apr 9;12:e55627. doi: 10.2196/55627.
8. Beyond the Hype-The Actual Role and Risks of AI in Today's Medical Practice: Comparative-Approach Study. JMIR AI. 2024 Jan 22;3:e49082. doi: 10.2196/49082.
9. A comparative vignette study: Evaluating the potential role of a generative AI model in enhancing clinical decision-making in nursing. J Adv Nurs. 2024 Feb 17. doi: 10.1111/jan.16101.
10. The Use of Generative AI for Scientific Literature Searches for Systematic Reviews: ChatGPT and Microsoft Bing AI Performance Evaluation. JMIR Med Inform. 2024 May 14;12:e51187. doi: 10.2196/51187.
