

Using ChatGPT and Google Bard to improve the readability of written patient information: a proof of concept.

Affiliations

KU Leuven Department of Public Health and Primary Care, KU Leuven-University of Leuven, Kapucijnenvoer 35 PB7001, 3000 Leuven, Belgium.

Institute of Health and Care Sciences, University of Gothenburg, Arvid Wallgrens backe 1, 413 46 Gothenburg, Sweden.

Publication

Eur J Cardiovasc Nurs. 2024 Mar 12;23(2):122-126. doi: 10.1093/eurjcn/zvad087.

DOI: 10.1093/eurjcn/zvad087
PMID: 37603843
Abstract

Patient information materials often tend to be written at a reading level that is too advanced for patients. In this proof-of-concept study, we used ChatGPT and Google Bard to reduce the reading level of three selected patient information sections from scientific journals. ChatGPT successfully improved readability. However, it could not achieve the recommended 6th-grade reading level. Bard reached the reading level of 6th graders but oversimplified the texts by omitting up to 83% of the content. Despite the present limitations, developers of patient information are encouraged to employ large language models, preferably ChatGPT, to optimize their materials.


Similar articles

1. Using ChatGPT and Google Bard to improve the readability of written patient information: a proof of concept.
Eur J Cardiovasc Nurs. 2024 Mar 12;23(2):122-126. doi: 10.1093/eurjcn/zvad087.
2. Assessment of readability, reliability, and quality of ChatGPT®, BARD®, Gemini®, Copilot®, Perplexity® responses on palliative care.
Medicine (Baltimore). 2024 Aug 16;103(33):e39305. doi: 10.1097/MD.0000000000039305.
3. The Use of Large Language Models to Generate Education Materials about Uveitis.
Ophthalmol Retina. 2024 Feb;8(2):195-201. doi: 10.1016/j.oret.2023.09.008. Epub 2023 Sep 15.
4. Dr. Google vs. Dr. ChatGPT: Exploring the Use of Artificial Intelligence in Ophthalmology by Comparing the Accuracy, Safety, and Readability of Responses to Frequently Asked Patient Questions Regarding Cataracts and Cataract Surgery.
Semin Ophthalmol. 2024 Aug;39(6):472-479. doi: 10.1080/08820538.2024.2326058. Epub 2024 Mar 22.
5. How artificial intelligence can provide information about subdural hematoma: Assessment of readability, reliability, and quality of ChatGPT, BARD, and perplexity responses.
Medicine (Baltimore). 2024 May 3;103(18):e38009. doi: 10.1097/MD.0000000000038009.
6. Artificial intelligence chatbots as sources of patient education material for obstructive sleep apnoea: ChatGPT versus Google Bard.
Eur Arch Otorhinolaryngol. 2024 Feb;281(2):985-993. doi: 10.1007/s00405-023-08319-9. Epub 2023 Nov 2.
7. Assessing the accuracy, usefulness, and readability of artificial-intelligence-generated responses to common dermatologic surgery questions for patient education: A double-blinded comparative study of ChatGPT and Google Bard.
J Am Acad Dermatol. 2024 May;90(5):1078-1080. doi: 10.1016/j.jaad.2024.01.037. Epub 2024 Feb 1.
8. Can Artificial Intelligence Improve the Readability of Patient Education Materials on Aortic Stenosis? A Pilot Study.
Cardiol Ther. 2024 Mar;13(1):137-147. doi: 10.1007/s40119-023-00347-0. Epub 2024 Jan 9.
9. How AI Responds to Common Lung Cancer Questions: ChatGPT vs Google Bard.
Radiology. 2023 Jun;307(5):e230922. doi: 10.1148/radiol.230922.
10. Performance of large language models (LLMs) in providing prostate cancer information.
BMC Urol. 2024 Aug 23;24(1):177. doi: 10.1186/s12894-024-01570-0.

Cited by

1. Evaluating ChatGPT's Utility in Biologic Therapy for Systemic Lupus Erythematosus: Comparative Study of ChatGPT and Google Web Search.
JMIR Form Res. 2025 Aug 28;9:e76458. doi: 10.2196/76458.
2. Enhancing the Readability of Online Pediatric Cataract Education Materials: A Comparative Study of Large Language Models.
Transl Vis Sci Technol. 2025 Aug 1;14(8):19. doi: 10.1167/tvst.14.8.19.
3. An assessment of the quality and readability level of online content on urinary tract infection treatment in Spanish and English.
Transl Androl Urol. 2025 Jul 30;14(7):1959-1977. doi: 10.21037/tau-2025-221. Epub 2025 Jul 28.
4. Lost in Translation: Preoperative Orthopaedic Education Materials Significantly Exceed Recommended Reading Levels.
JB JS Open Access. 2025 Aug 7;10(3). doi: 10.2106/JBJS.OA.25.00143. eCollection 2025 Jul-Sep.
5. Generative AI/LLMs for Plain Language Medical Information for Patients, Caregivers and General Public: Opportunities, Risks and Ethics.
Patient Prefer Adherence. 2025 Jul 31;19:2227-2249. doi: 10.2147/PPA.S527922. eCollection 2025.
6. The Emergence of Applied Artificial Intelligence in the Realm of Value Based Musculoskeletal Care.
Curr Rev Musculoskelet Med. 2025 Jun 14. doi: 10.1007/s12178-025-09982-7.
7. Women's Preferences and Willingness to Pay for AI Chatbots in Women's Health: Discrete Choice Experiment Study.
J Med Internet Res. 2025 Jun 10;27:e67303. doi: 10.2196/67303.
8. Enhancing the Readability of Online Patient Education Materials Using Large Language Models: Cross-Sectional Study.
J Med Internet Res. 2025 Jun 4;27:e69955. doi: 10.2196/69955.
9. Assessing the Capability of Large Language Model Chatbots in Generating Plain Language Summaries.
Cureus. 2025 Mar 21;17(3):e80976. doi: 10.7759/cureus.80976. eCollection 2025 Mar.
10. Assessing the Quality and Reliability of ChatGPT's Responses to Radiotherapy-Related Patient Queries: Comparative Study With GPT-3.5 and GPT-4.
JMIR Cancer. 2025 Apr 16;11:e63677. doi: 10.2196/63677.