Enhancing Readability of Online Patient-Facing Content: The Role of AI Chatbots in Improving Cancer Information Accessibility.

Author Affiliations

1Division of Surgical Oncology, Department of Surgery, UT Southwestern Medical Center, Dallas, TX.

2Department of Surgery, Yale School of Medicine, New Haven, CT.

Publication Information

J Natl Compr Canc Netw. 2024 May 15;22(2D):e237334. doi: 10.6004/jnccn.2023.7334.

DOI: 10.6004/jnccn.2023.7334
PMID: 38749478
Abstract

BACKGROUND

Internet-based health education is increasingly vital in patient care. However, the readability of online information often exceeds the average reading level of the US population, limiting accessibility and comprehension. This study investigates the use of chatbot artificial intelligence to improve the readability of cancer-related patient-facing content.

METHODS

We used ChatGPT 4.0 to rewrite content about breast, colon, lung, prostate, and pancreas cancer across 34 websites associated with NCCN Member Institutions. Readability was analyzed using Fry Readability Score, Flesch-Kincaid Grade Level, Gunning Fog Index, and Simple Measure of Gobbledygook. The primary outcome was the mean readability score for the original and artificial intelligence (AI)-generated content. As secondary outcomes, we assessed the accuracy, similarity, and quality using F1 scores, cosine similarity scores, and section 2 of the DISCERN instrument, respectively.
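The grade-level metrics named above are published formulas over simple text statistics. As a minimal sketch of two of them (Flesch-Kincaid Grade Level and Gunning Fog Index), the following uses a naive vowel-group syllable counter and regex sentence splitting; real readability tools use pronunciation dictionaries and more careful tokenization, so treat this as an illustration of the formulas, not a reproduction of the study's pipeline.

```python
import re


def count_syllables(word: str) -> int:
    """Naive syllable estimate: count groups of consecutive vowels."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))


def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid Grade Level:
    0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59


def gunning_fog(text: str) -> float:
    """Gunning Fog Index:
    0.4*((words/sentences) + 100*(complex words/words)),
    where a complex word has three or more syllables."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z]+", text)
    complex_words = [w for w in words if count_syllables(w) >= 3]
    return 0.4 * (len(words) / len(sentences) + 100 * len(complex_words) / len(words))
```

Both scores rise with longer sentences and more polysyllabic words, which is why the study's finding that ChatGPT used "simpler words and shorter sentences" translates directly into lower grade-level estimates.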

RESULTS

The mean readability level across the 34 websites was equivalent to a university freshman level (grade 13±1.5). However, after ChatGPT's intervention, the AI-generated outputs had a mean readability score equivalent to a high school freshman education level (grade 9±0.8). The overall F1 score for the rewritten content was 0.87, the precision score was 0.934, and the recall score was 0.814. Compared with their original counterparts, the AI-rewritten content had a cosine similarity score of 0.915 (95% CI, 0.908-0.922). The improved readability was attributed to simpler words and shorter sentences. The mean DISCERN score of the random sample of AI-generated content was equivalent to "good" (28.5±5), with no significant differences compared with their original counterparts.
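The similarity and accuracy figures above come from cosine similarity and token-level precision/recall/F1. The abstract does not specify how the texts were vectorized, so the sketch below uses a simple bag-of-words term-frequency representation; the function names and the tokenization are illustrative assumptions, not the study's actual implementation.

```python
import math
from collections import Counter


def cosine_similarity(text_a: str, text_b: str) -> float:
    """Cosine similarity between bag-of-words term-frequency vectors."""
    a, b = Counter(text_a.lower().split()), Counter(text_b.lower().split())
    dot = sum(a[t] * b[t] for t in set(a) & set(b))
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0


def token_f1(reference: str, candidate: str) -> tuple[float, float, float]:
    """Token-overlap precision, recall, and F1 of candidate vs. reference."""
    ref, cand = reference.lower().split(), candidate.lower().split()
    overlap = sum((Counter(ref) & Counter(cand)).values())
    if overlap == 0:
        return 0.0, 0.0, 0.0
    precision, recall = overlap / len(cand), overlap / len(ref)
    return precision, recall, 2 * precision * recall / (precision + recall)
```

Under this kind of measure, a cosine similarity of 0.915 indicates the rewritten text largely reuses the original vocabulary, while F1 of 0.87 (precision 0.934, recall 0.814) suggests the rewrites kept most key content but dropped some original detail.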

CONCLUSIONS

Our study demonstrates the potential of AI chatbots to improve the readability of patient-facing content while maintaining content quality. The decrease in requisite literacy after AI revision emphasizes the potential of this technology to reduce health care disparities caused by a mismatch between educational resources available to a patient and their health literacy.

Similar Articles

1. Enhancing Readability of Online Patient-Facing Content: The Role of AI Chatbots in Improving Cancer Information Accessibility.
   J Natl Compr Canc Netw. 2024 May 15;22(2D):e237334. doi: 10.6004/jnccn.2023.7334.
2. Evaluating the Efficacy of ChatGPT as a Patient Education Tool in Prostate Cancer: Multimetric Assessment.
   J Med Internet Res. 2024 Aug 14;26:e55939. doi: 10.2196/55939.
3. Assessment of online patient education materials from major ophthalmologic associations.
   JAMA Ophthalmol. 2015 Apr;133(4):449-54. doi: 10.1001/jamaophthalmol.2014.6104.
4. Dr. Google to Dr. ChatGPT: assessing the content and quality of artificial intelligence-generated medical information on appendicitis.
   Surg Endosc. 2024 May;38(5):2887-2893. doi: 10.1007/s00464-024-10739-5. Epub 2024 Mar 5.
5. The quality, understandability, readability, and popularity of online educational materials for heart murmur.
   Cardiol Young. 2020 Mar;30(3):328-336. doi: 10.1017/S104795111900307X. Epub 2019 Dec 26.
6. Appropriateness and readability of Google Bard and ChatGPT-3.5 generated responses for surgical treatment of glaucoma.
   Rom J Ophthalmol. 2024 Jul-Sep;68(3):243-248. doi: 10.22336/rjo.2024.45.
7. Can Artificial Intelligence Improve the Readability of Patient Education Materials?
   Clin Orthop Relat Res. 2023 Nov 1;481(11):2260-2267. doi: 10.1097/CORR.0000000000002668. Epub 2023 Apr 28.
8. Assessing the readability, reliability, and quality of artificial intelligence chatbot responses to the 100 most searched queries about cardiopulmonary resuscitation: An observational study.
   Medicine (Baltimore). 2024 May 31;103(22):e38352. doi: 10.1097/MD.0000000000038352.
9. Optimizing Ophthalmology Patient Education via ChatBot-Generated Materials: Readability Analysis of AI-Generated Patient Education Materials and The American Society of Ophthalmic Plastic and Reconstructive Surgery Patient Brochures.
   Ophthalmic Plast Reconstr Surg. 2024;40(2):212-216. doi: 10.1097/IOP.0000000000002549. Epub 2023 Nov 16.
10. Assessment of Artificial Intelligence Chatbot Responses to Top Searched Queries About Cancer.
   JAMA Oncol. 2023 Oct 1;9(10):1437-1440. doi: 10.1001/jamaoncol.2023.2947.

Cited By

1. Utilisation of AI-driven chatbots for perioperative health information seeking: a descriptive qualitative study of orthopaedic patients and family members.
   BMJ Open. 2025 Sep 4;15(9):e099824. doi: 10.1136/bmjopen-2025-099824.
2. Transforming Cancer Care: A Narrative Review on Leveraging Artificial Intelligence to Advance Immunotherapy in Underserved Communities.
   J Clin Med. 2025 Jul 29;14(15):5346. doi: 10.3390/jcm14155346.
3. Assessing ChatGPT's Educational Potential in Lung Cancer Radiotherapy From Clinician and Patient Perspectives: Content Quality and Readability Analysis.
   JMIR Cancer. 2025 Aug 13;11:e69783. doi: 10.2196/69783.
4. Qualitative study on the characteristics and dilemmas of eHealth literacy among family caregivers of breast cancer patients.
   Digit Health. 2025 May 26;11:20552076251346240. doi: 10.1177/20552076251346240. eCollection 2025 Jan-Dec.
5. Artificial intelligence as a tool for improving health literacy in kidney care.
   PLOS Digit Health. 2025 Feb 21;4(2):e0000746. doi: 10.1371/journal.pdig.0000746. eCollection 2025 Feb.
6. Enhancing Patient Comprehension of Glomerular Disease Treatments Using ChatGPT.
   Healthcare (Basel). 2024 Dec 31;13(1):57. doi: 10.3390/healthcare13010057.
7. Evaluating the quality and readability of ChatGPT-generated patient-facing medical information in rhinology.
   Eur Arch Otorhinolaryngol. 2025 Apr;282(4):1911-1920. doi: 10.1007/s00405-024-09180-0. Epub 2024 Dec 26.
8. Evaluating Quality and Readability of AI-generated Information on Living Kidney Donation.
   Transplant Direct. 2024 Dec 10;11(1):e1740. doi: 10.1097/TXD.0000000000001740. eCollection 2025 Jan.
9. Large language models in patient education: a scoping review of applications in medicine.
   Front Med (Lausanne). 2024 Oct 29;11:1477898. doi: 10.3389/fmed.2024.1477898. eCollection 2024.
10. Development and Validation of an Artificial Intelligence-Assisted Patient Education Material for Ostomy Patients: A Methodological Study.
   J Adv Nurs. 2025 Jul;81(7):3859-3867. doi: 10.1111/jan.16542. Epub 2024 Oct 18.