

Utility of Generative Artificial Intelligence for Patient Care Counseling for Mandibular Fractures.

Author Information

Shaari Ariana L, Patil Disha P, Mohammed Saad, Salehi Parsa P

Affiliations

Department of Head and Neck Surgery, Rutgers New Jersey Medical School.

Rutgers School of Dental Medicine, Newark, NJ.

Publication Information

J Craniofac Surg. 2024 Nov 4. doi: 10.1097/SCS.0000000000010832.

DOI: 10.1097/SCS.0000000000010832
PMID: 39495556
Abstract

OBJECTIVE

To determine the readability and accuracy of information regarding mandible fractures generated by Chat Generative Pre-trained Transformer (ChatGPT) versions 3.5 and 4o.

BACKGROUND

Patients are increasingly turning to generative artificial intelligence to answer medical queries. To date, the accuracy and readability of responses regarding mandible fractures have not been assessed.

METHODS

Twenty patient questions regarding mandible fractures were developed by querying AlsoAsked (https://alsoasked.com), SearchResponse (https://searchresponse.io), and Answer the Public (https://answerthepublic.com/). Questions were posed to ChatGPT 3.5 and 4o. Readability was assessed by calculating the Flesch Kincaid Reading Ease, Flesch Kincaid Grade Level, number of sentences, and percentage of complex words. Accuracy was assessed by a board-certified facial plastic and reconstructive otolaryngologist using a 5-point Likert Scale.
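The readability metrics named above follow standard published formulas. As a minimal sketch (not the study's actual tooling), the Flesch Kincaid Reading Ease and Grade Level can be computed from words-per-sentence and syllables-per-word; the syllable counter here is a naive vowel-group heuristic, an assumption of this sketch, whereas real readability tools use dictionaries or validated libraries.

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count contiguous vowel groups, with a rough
    # silent-"e" adjustment. An approximation, not a dictionary lookup.
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def flesch_scores(text: str):
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # average words per sentence
    spw = syllables / len(words)        # average syllables per word
    # Published Flesch Kincaid formulas.
    reading_ease = 206.835 - 1.015 * wps - 84.6 * spw
    grade_level = 0.39 * wps + 11.8 * spw - 15.59
    # Percentage of "complex" words (3+ syllables), as reported in the study.
    pct_complex = 100 * sum(count_syllables(w) >= 3 for w in words) / len(words)
    return reading_ease, grade_level, pct_complex
```

Higher Reading Ease scores indicate easier text; patient education materials are typically recommended to score at roughly a sixth-grade reading level, which is the benchmark the responses exceeded.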

RESULTS

No significant differences were observed between the two versions for readability or accuracy. Readability was above recommended levels for patient education materials. Accuracy was low, and a majority of responses were deemed inappropriate for patient use with multiple inaccuracies and/or missing information.

CONCLUSION

ChatGPT produced responses written at a high level inappropriate for the average patient, in addition to containing several inaccurate statements. Patients and clinicians should be aware of the limitations of generative artificial intelligence when seeking medical information regarding mandible fractures.


Similar Articles

1
Utility of Generative Artificial Intelligence for Patient Care Counseling for Mandibular Fractures.
J Craniofac Surg. 2024 Nov 4. doi: 10.1097/SCS.0000000000010832.
2
Assessing ChatGPT's Educational Potential in Lung Cancer Radiotherapy From Clinician and Patient Perspectives: Content Quality and Readability Analysis.
JMIR Cancer. 2025 Aug 13;11:e69783. doi: 10.2196/69783.
3
Using Artificial Intelligence ChatGPT to Access Medical Information About Chemical Eye Injuries: Comparative Study.
JMIR Form Res. 2025 Aug 13;9:e73642. doi: 10.2196/73642.
4
Evaluating ChatGPT's Utility in Biologic Therapy for Systemic Lupus Erythematosus: Comparative Study of ChatGPT and Google Web Search.
JMIR Form Res. 2025 Aug 28;9:e76458. doi: 10.2196/76458.
5
Assessing the response quality and readability of ChatGPT in stuttering.
J Fluency Disord. 2025 Sep;85:106149. doi: 10.1016/j.jfludis.2025.106149. Epub 2025 Aug 15.
6
Accuracy and Readability of ChatGPT Responses to Patient-Centric Strabismus Questions.
J Pediatr Ophthalmol Strabismus. 2025 May-Jun;62(3):220-227. doi: 10.3928/01913913-20250110-02. Epub 2025 Feb 19.
7
Can Artificial Intelligence Improve the Readability of Patient Education Materials?
Clin Orthop Relat Res. 2023 Nov 1;481(11):2260-2267. doi: 10.1097/CORR.0000000000002668. Epub 2023 Apr 28.
8
Evaluating if ChatGPT Can Answer Common Patient Questions Compared to OrthoInfo Regarding Lateral Epicondylitis.
Iowa Orthop J. 2025;45(1):19-32.
9
Comparing physician and artificial intelligence chatbot responses to posthysterectomy questions posted to a public social media forum.
AJOG Glob Rep. 2025 Aug 5;5(3):100553. doi: 10.1016/j.xagr.2025.100553. eCollection 2025 Aug.
10
Can ChatGPT provide parent education for oral immunotherapy?
Ann Allergy Asthma Immunol. 2025 Jul;135(1):87-90. doi: 10.1016/j.anai.2025.04.011. Epub 2025 Apr 24.

Cited By

1
Using large language models to generate child-friendly education materials on myopia.
Digit Health. 2025 Jul 30;11:20552076251362338. doi: 10.1177/20552076251362338. eCollection 2025 Jan-Dec.