Artificial intelligence solutions for temporomandibular joint disorders: Contributions and future potential of ChatGPT.

Authors

Kula Betul, Kula Ahmet, Bagcier Fatih, Alyanak Bulent

Affiliations

Department of Orthodontics, Istanbul Galata University, Istanbul, Türkiye.

Department of Prosthodontics, Uskudar University, Istanbul, Türkiye.

Publication

Korean J Orthod. 2025 Mar 25;55(2):131-141. doi: 10.4041/kjod24.106. Epub 2024 Dec 11.


DOI: 10.4041/kjod24.106
PMID: 40104855
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11922634/
Abstract

OBJECTIVE: This study aimed to evaluate the reliability and usefulness of information generated by Chat Generative Pre-Trained Transformer (ChatGPT) on temporomandibular joint disorders (TMD).

METHODS: We asked ChatGPT about the diseases specified in the TMD classification and scored the responses using Likert reliability and usefulness scales, the modified DISCERN (mDISCERN) scale, and the Global Quality Scale (GQS).

RESULTS: The highest Likert scores for both reliability and usefulness were for masticatory muscle disorders (mean ± standard deviation [SD]: 6.0 ± 0), and the lowest scores were for inflammatory disorders of the temporomandibular joint (mean ± SD: 4.3 ± 0.6 for reliability, 4.0 ± 0 for usefulness). The median Likert reliability score indicates that the responses are highly reliable. The median Likert usefulness score was 5 (4-6), indicating that the responses were moderately useful. A comparative analysis was performed, and no statistically significant differences were found in any subject for either reliability or usefulness (P = 0.083-1.000). The median mDISCERN score was 4 (3-5) for the two raters. A statistically significant difference was observed in the mean mDISCERN scores between the two raters (P = 0.046). The GQS scores indicated a moderate to high quality (mean ± SD: 3.8 ± 0.8 for rater 1, 4.0 ± 0.5 for rater 2). No statistically significant correlation was found between mDISCERN and GQS scores (r = -0.006, P = 0.980).

CONCLUSIONS: Although ChatGPT-4 has significant potential, it can be used as an additional source of information regarding TMD for patients and clinicians.
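
The statistics above come from non-parametric comparisons of ordinal scale scores. As a rough illustration only (not the authors' analysis code), the Python sketch below shows how two raters' scores might be compared and how an mDISCERN-GQS correlation might be computed; the rater scores are invented, and the choice of Wilcoxon signed-rank and Spearman's rho is an assumption based on what is typical for this kind of data.

import numpy as np
from scipy import stats

# Hypothetical scores from two raters for a set of ChatGPT responses
# (values are made up for illustration; mDISCERN and GQS both run 1-5).
mdiscern_rater1 = np.array([4, 5, 3, 4, 4, 5, 3, 4])
mdiscern_rater2 = np.array([4, 4, 4, 5, 3, 4, 4, 5])
gqs_rater1 = np.array([4, 4, 3, 5, 4, 3, 4, 4])

# Paired comparison of the two raters (Wilcoxon signed-rank is a common
# choice for paired ordinal data; the paper reports P = 0.046 for mDISCERN).
w_stat, p_raters = stats.wilcoxon(mdiscern_rater1, mdiscern_rater2)
print(f"Rater 1 vs rater 2 (mDISCERN): W = {w_stat:.1f}, P = {p_raters:.3f}")

# Rank correlation between mDISCERN and GQS scores (Spearman's rho suits
# ordinal scales; the paper reports r = -0.006, P = 0.980).
rho, p_corr = stats.spearmanr(mdiscern_rater1, gqs_rater1)
print(f"mDISCERN vs GQS: r = {rho:.3f}, P = {p_corr:.3f}")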

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/81b0/11922634/1a391bcaaa17/kjod-55-2-131-f1.jpg

Similar Articles

[1] Artificial intelligence solutions for temporomandibular joint disorders: Contributions and future potential of ChatGPT. Korean J Orthod. 2025-3-25
[2] Evaluación de la fiabilidad y legibilidad de las respuestas de los chatbots como recurso de información al paciente para las exploraciones PET-TC más comunes. Rev Esp Med Nucl Imagen Mol (Engl Ed). 2025
[3] ChatGPT-4o's performance on pediatric Vesicoureteral reflux. J Pediatr Urol. 2025-4
[4] Evaluation of the reliability and readability of ChatGPT-4 responses regarding hypothyroidism during pregnancy. Sci Rep. 2024-1-2
[5] A Performance Evaluation of Large Language Models in Keratoconus: A Comparative Study of ChatGPT-3.5, ChatGPT-4.0, Gemini, Copilot, Chatsonic, and Perplexity. J Clin Med. 2024-10-30
[6] Can artificial intelligence models serve as patient information consultants in orthodontics? BMC Med Inform Decis Mak. 2024-7-29
[7] Reliability and Usefulness of ChatGPT for Inflammatory Bowel Diseases: An Analysis for Patients and Healthcare Professionals. Cureus. 2023-10-9
[8] Does the Information Quality of ChatGPT Meet the Requirements of Orthopedics and Trauma Surgery? Cureus. 2024-5-15
[9] Quality and reliability evaluation of YouTube® exercises content for temporomandibular disorders. BMC Oral Health. 2025-2-25
[10] Assessing the Quality and Reliability of ChatGPT's Responses to Radiotherapy-Related Patient Queries: Comparative Study With GPT-3.5 and GPT-4. JMIR Cancer. 2025-4-16
