Online and ChatGPT-generated patient education materials regarding brain tumor prognosis fail to meet readability standards.

Author Information

Shukla Ishav Y, Sun Matthew Z

Affiliations

Department of Neurological Surgery, University of Texas Southwestern Medical Center, Dallas, TX, USA.

Publication Information

J Clin Neurosci. 2025 Aug;138:111410. doi: 10.1016/j.jocn.2025.111410. Epub 2025 Jun 20.

DOI: 10.1016/j.jocn.2025.111410
PMID: 40543265
Abstract

OBJECTIVE

Online healthcare literature often exceeds the general population's literacy level. Our study assesses the readability of online and ChatGPT-generated materials on glioblastomas, meningiomas, and pituitary adenomas, comparing readability by tumor type, institutional affiliation, authorship, and source (websites vs. ChatGPT).

METHODS

This cross-sectional study involved a Google Chrome search (November 2024) using 'prognosis of [tumor type],' with the first 100 English-language, patient-directed results per tumor included. Websites were categorized by tumor, institutional affiliation (university vs. non-affiliated), and authorship (medical-professional reviewed vs. non-reviewed). ChatGPT 4.0 was queried with three standardized questions per tumor, based on the most prevalent content found in patient-facing websites. Five metrics were assessed: Flesch Reading Ease, Flesch-Kincaid Grade Level, Gunning Fog Index, Coleman-Liau Index, and SMOG Index. Comparisons were conducted using Mann-Whitney U tests and t-tests.
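The five readability metrics named above are closed-form formulas over word, sentence, and syllable counts. As a minimal illustrative sketch (not the validated tooling the authors used; the vowel-group syllable counter is a naive assumption, whereas production tools use pronunciation dictionaries), the two Flesch metrics can be computed like this:

```python
import re

def count_syllables(word: str) -> int:
    # Naive heuristic: count vowel groups, dropping a trailing silent "e".
    groups = re.findall(r"[aeiouy]+", word.lower())
    n = len(groups)
    if word.lower().endswith("e") and n > 1:
        n -= 1
    return max(n, 1)

def readability_scores(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / len(sentences)   # mean words per sentence
    spw = syllables / len(words)        # mean syllables per word
    return {
        # Flesch Reading Ease: higher = easier (90+ is roughly 5th grade)
        "FRE": 206.835 - 1.015 * wps - 84.6 * spw,
        # Flesch-Kincaid Grade Level: approximate U.S. school grade
        "FKGL": 0.39 * wps + 11.8 * spw - 15.59,
    }
```

Short, monosyllabic sentences score high on FRE and low on FKGL; the other three indices (Gunning Fog, Coleman-Liau, SMOG) are analogous formulas over complex-word or character counts.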

RESULTS

No websites or ChatGPT responses met the readability benchmarks of 6th grade or below (AMA guideline) or 8th grade or below (NIH guideline). Of the websites, 50.4% were at a 9th-12th grade level, 47.9% at an undergraduate level, and 1.7% at a graduate level. Websites reviewed by medical professionals had higher Flesch Reading Ease (FRE; p = 0.03) and lower Coleman-Liau Index (CLI; p = 0.009) compared to non-reviewed websites. Among ChatGPT responses, 93.3% were graduate level. ChatGPT responses had lower readability than websites across all metrics (p < 0.001).
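The website-versus-ChatGPT comparisons rest on the Mann-Whitney U test named in the methods. A self-contained sketch on made-up Flesch-Kincaid grade levels (the sample values and the normal-approximation p-value are illustrative assumptions, not the study's data or exact procedure):

```python
import math

def mann_whitney_u(a, b):
    """Two-sided Mann-Whitney U test via the normal approximation.
    Assumes no tied values (no tie correction is applied)."""
    n1, n2 = len(a), len(b)
    pooled = sorted(a + b)
    rank = {v: i + 1 for i, v in enumerate(pooled)}  # ranks 1..n1+n2
    r1 = sum(rank[v] for v in a)                     # rank sum of group a
    u1 = r1 - n1 * (n1 + 1) / 2
    u = min(u1, n1 * n2 - u1)                        # smaller U statistic
    mu = n1 * n2 / 2
    sigma = math.sqrt(n1 * n2 * (n1 + n2 + 1) / 12)
    z = (u - mu) / sigma
    p = math.erfc(abs(z) / math.sqrt(2))             # two-sided p-value
    return u, p

# Illustrative (made-up) grade levels for the two sources
website_fkgl = [9.1, 10.4, 11.2, 12.0, 10.8, 9.7, 11.5, 10.1]
chatgpt_fkgl = [14.2, 15.1, 13.8, 16.0, 14.9, 15.5, 14.4, 15.8]
u, p = mann_whitney_u(website_fkgl, chatgpt_fkgl)
```

With the two samples fully separated, U is 0 and the p-value is small, mirroring the direction of the reported result (ChatGPT responses harder to read across all metrics).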

CONCLUSION

Online and ChatGPT-generated neuro-oncology materials exceed recommended readability standards, potentially hindering patients' ability to make informed decisions. Future efforts should focus on standardizing readability guidelines, refining AI-generated content, incorporating professional oversight consistently, and improving the accessibility of online neuro-oncology materials.


Similar Articles

1. Is Information About Musculoskeletal Malignancies From Large Language Models or Web Resources at a Suitable Reading Level for Patients?
Clin Orthop Relat Res. 2025 Feb 1;483(2):306-315. doi: 10.1097/CORR.0000000000003263. Epub 2024 Sep 25.

2. Enhancing the Readability of Online Patient Education Materials Using Large Language Models: Cross-Sectional Study.
J Med Internet Res. 2025 Jun 4;27:e69955. doi: 10.2196/69955.

3. American Academy of Orthopaedic Surgeons OrthoInfo provides more readable information regarding rotator cuff injury than ChatGPT.
J ISAKOS. 2025 Feb 12;12:100841. doi: 10.1016/j.jisako.2025.100841.

4. Can Artificial Intelligence Improve the Readability of Patient Education Materials?
Clin Orthop Relat Res. 2023 Nov 1;481(11):2260-2267. doi: 10.1097/CORR.0000000000002668. Epub 2023 Apr 28.

5. A joint effort: Evaluating the quality and readability of online resources relating to total hip arthroplasty.
Surgeon. 2025 Aug;23(4):220-224. doi: 10.1016/j.surge.2025.02.016. Epub 2025 Mar 14.

6. Improving the Readability of Institutional Heart Failure-Related Patient Education Materials Using GPT-4: Observational Study.
JMIR Cardio. 2025 Jul 8;9:e68817. doi: 10.2196/68817.

7. Assessing Readability of Skin Cancer Screening Resources: A Comparison of Online Websites and ChatGPT Responses.
J Cancer Educ. 2025 Jul 1. doi: 10.1007/s13187-025-02683-2.

8. Currently Available Large Language Models Are Moderately Effective in Improving Readability of English and Spanish Patient Education Materials in Pediatric Orthopaedics.
J Am Acad Orthop Surg. 2025 Jun 24. doi: 10.5435/JAAOS-D-25-00267.

9. Can artificial intelligence improve the readability of patient education information in gynecology?
Am J Obstet Gynecol. 2025 Jun 25. doi: 10.1016/j.ajog.2025.06.047.