

Assessing the Quality and Readability of Online Patient Information: ENT UK Patient Information e-Leaflets versus Responses by a Generative Artificial Intelligence.

Author Information

Shamil Eamon, Ko Tsz Ki, Fan Ka Siu, Schuster-Bruce James, Jaafar Mustafa, Khwaja Sadie, Eynon-Lewis Nicholas, D'Souza Alwyn, Andrews Peter

Affiliations

The Royal National ENT Hospital, University College London Hospitals NHS Foundation Trust, London, England, United Kingdom.

Royal Stoke University Hospital, United Kingdom.

Publication Information

Facial Plast Surg. 2024 Oct 15. doi: 10.1055/a-2413-3675.

DOI: 10.1055/a-2413-3675
PMID: 39260421
Abstract

BACKGROUND

The evolution of artificial intelligence has introduced new ways to disseminate health information, including natural language processing models like ChatGPT. However, the quality and readability of such digitally generated information remains understudied. This study is the first to compare the quality and readability of digitally generated health information against leaflets produced by professionals.

METHODOLOGY

Five ENT UK patient information leaflets and their corresponding ChatGPT responses were extracted from the Internet. Assessors with varying degrees of medical knowledge evaluated the content using the Ensuring Quality Information for Patients (EQIP) tool and readability tools including the Flesch-Kincaid Grade Level (FKGL). Statistical analysis was performed to identify differences between leaflets, assessors, and sources of information.
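The FKGL metric used in the methodology above is a simple formula over words, sentences, and syllables. The sketch below illustrates it; the vowel-group syllable counter is a rough heuristic (real readability tools use dictionaries or more refined rules), so treat it as an approximation rather than the instrument the authors used:

```python
import re

def flesch_kincaid_grade(text):
    """Approximate Flesch-Kincaid Grade Level (FKGL).

    FKGL = 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59
    Higher scores indicate text requiring a higher reading grade.
    """
    # Sentences: split on terminal punctuation, drop empty fragments.
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)

    def count_syllables(word):
        # Heuristic: count vowel groups; discount a trailing silent 'e'.
        groups = re.findall(r"[aeiouy]+", word.lower())
        n = len(groups)
        if word.lower().endswith("e") and n > 1:
            n -= 1
        return max(n, 1)

    total_syllables = sum(count_syllables(w) for w in words)
    return (0.39 * len(words) / len(sentences)
            + 11.8 * total_syllables / len(words)
            - 15.59)
```

Running it on plain versus dense prose shows the expected spread: short, monosyllabic sentences score at an early-school grade level, while long polysyllabic sentences score far higher, which is the axis on which the leaflets and ChatGPT responses were compared.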

RESULTS

ENT UK leaflets were of moderate quality, scoring a median EQIP of 23. Statistically significant differences in overall EQIP score were identified between ENT UK leaflets, but ChatGPT responses were of uniform quality. Nonspecialist doctors rated the highest EQIP scores, while medical students scored the lowest. The mean readability of ENT UK leaflets was higher than that of ChatGPT responses. The information metrics of ENT UK leaflets were moderate and varied between topics. Equivalent ChatGPT information provided comparable content quality, but with reduced readability.

CONCLUSION

ChatGPT patient information and professionally produced leaflets had comparable content, but large language model content required a higher reading age. With the increasing use of online health resources, this study highlights the need for a balanced approach that considers both the quality and readability of patient education materials.


Similar Articles

1
Assessing the Quality and Readability of Online Patient Information: ENT UK Patient Information e-Leaflets versus Responses by a Generative Artificial Intelligence.
Facial Plast Surg. 2024 Oct 15. doi: 10.1055/a-2413-3675.
2
Is Information About Musculoskeletal Malignancies From Large Language Models or Web Resources at a Suitable Reading Level for Patients?
Clin Orthop Relat Res. 2025 Feb 1;483(2):306-315. doi: 10.1097/CORR.0000000000003263. Epub 2024 Sep 25.
3
American Academy of Orthopaedic Surgeons OrthoInfo provides more readable information regarding rotator cuff injury than ChatGPT.
J ISAKOS. 2025 Feb 12;12:100841. doi: 10.1016/j.jisako.2025.100841.
4
Can Artificial Intelligence Improve the Readability of Patient Education Materials?
Clin Orthop Relat Res. 2023 Nov 1;481(11):2260-2267. doi: 10.1097/CORR.0000000000002668. Epub 2023 Apr 28.
5
Assessing chatbots ability to produce leaflets on cataract surgery: Bing AI, chatGPT 3.5, chatGPT 4o, ChatSonic, Google Bard, Perplexity, and Pi.
J Cataract Refract Surg. 2025 May 1;51(5):371-375. doi: 10.1097/j.jcrs.0000000000001622.
6
Readability of AI-Generated Patient Information Leaflets on Alzheimer's, Vascular Dementia, and Delirium.
Cureus. 2025 Jun 6;17(6):e85463. doi: 10.7759/cureus.85463. eCollection 2025 Jun.
7
Evaluation of Information Provided by ChatGPT Versions on Traumatic Dental Injuries for Dental Students and Professionals.
Dent Traumatol. 2025 Aug;41(4):427-436. doi: 10.1111/edt.13042. Epub 2025 Jan 23.
8
Readability of patient education materials in ophthalmology: a single-institution study and systematic review.
BMC Ophthalmol. 2016 Aug 3;16:133. doi: 10.1186/s12886-016-0315-0.
9
Enhancing the Readability of Online Patient Education Materials Using Large Language Models: Cross-Sectional Study.
J Med Internet Res. 2025 Jun 4;27:e69955. doi: 10.2196/69955.
10
Evaluating the readability, quality, and reliability of responses generated by ChatGPT, Gemini, and Perplexity on the most commonly asked questions about Ankylosing spondylitis.
PLoS One. 2025 Jun 18;20(6):e0326351. doi: 10.1371/journal.pone.0326351. eCollection 2025.

Cited By

1
Readability of AI-Generated Patient Information Leaflets on Alzheimer's, Vascular Dementia, and Delirium.
Cureus. 2025 Jun 6;17(6):e85463. doi: 10.7759/cureus.85463. eCollection 2025 Jun.
2
Exploring the Utility of ChatGPT in Cleft Lip Repair Education.
J Clin Med. 2025 Feb 4;14(3):993. doi: 10.3390/jcm14030993.