

The Use of ChatGPT-4.0 to Simplify Breast Pathology Reports: A Study on Readability and Accuracy.

Author Information

Bheemireddy Samhita, Leslie Sarah E, Durden Jakob A, Burnet George, Aryanpour Zain, Fong Ashlyn, Higgins Madeline G, Greenseid Samantha, McLemore Lauren, Li Gande, Miles Randy, Taft Nancy, Tevis Sarah

Affiliations

Albany Medical College, Albany Medical Center, Albany, NY, USA.

Adult and Child Center for Outcomes Research and Delivery Science (ACCORDS), University of Colorado Anschutz Medical Campus, Aurora, CO, USA.

Publication Information

Ann Surg Oncol. 2025 Jul 21. doi: 10.1245/s10434-025-17860-2.

DOI: 10.1245/s10434-025-17860-2
PMID: 40690168
Abstract

BACKGROUND

Patients have immediate access to their diagnostic reports, but these reports exceed the recommended reading level for patient-facing materials. Generative artificial intelligence may be a tool for improving patient comprehension of health information. This study assessed the readability and accuracy of ChatGPT-simplified breast pathology reports.

METHODS

Ten de-identified patient breast pathology reports were simplified by ChatGPT-4.0 using three different prompts. Prompt 1 requested simplification, Prompt 2 added a 6th-grade-level specification, and Prompt 3 requested essential information. The Flesch-Kincaid Reading Level (FKRL) and Flesch Reading Ease Score (FRES) were utilized to quantify readability and ease of reading, respectively. Five physicians used a four-point scale to assess factual correctness, relevancy, and fabrications to determine overall accuracy. Mean scores and standard deviations for FKRL, FRES, and accuracy scores were compared using analysis of variance (ANOVA) and t-tests.
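The two readability metrics named above have standard published formulas: FKRL estimates a U.S. grade level from words per sentence and syllables per word, while FRES scores ease of reading on a roughly 0-100 scale (higher = easier). A minimal sketch of both, assuming a naive vowel-group syllable heuristic (published tools use pronunciation dictionaries, so exact scores will differ):

```python
import re

def count_syllables(word: str) -> int:
    """Approximate syllables as runs of vowels (at least 1 per word)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text: str) -> tuple[float, float]:
    """Return (FKRL grade level, FRES ease score) for a text sample."""
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    wps = len(words) / sentences   # mean words per sentence
    spw = syllables / len(words)   # mean syllables per word
    # Standard Flesch-Kincaid Grade Level and Flesch Reading Ease formulas
    fkrl = 0.39 * wps + 11.8 * spw - 15.59
    fres = 206.835 - 1.015 * wps - 84.6 * spw
    return fkrl, fres
```

A simplified report should show a lower FKRL and a higher FRES than the original; the study's 6th-grade target corresponds to FKRL ≈ 6.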

RESULTS

Prompt 2 demonstrated a reduction in FKRL (p < 0.001) and an increase in FRES (p < 0.001), demonstrating improved readability and ease of reading. ChatGPT-simplified reports received an overall accuracy score of 3.59/4 (standard deviation [SD] ± 0.17). The scores by rubric category were 3.62 (SD ± 0.31) for factual correctness (4 = completely correct), 3.27 (SD ± 0.44) for relevancy (4 = completely relevant), and 3.89 (SD ± 0.11) for fabricated information (4 = no fabricated information).

CONCLUSIONS

ChatGPT simplified breast pathology reports to the reading level recommended for patient-facing materials when given a grade-level specification while mostly maintaining accuracy. To minimize the risk of medically inaccurate and/or misleading information, ChatGPT-simplified reports should be reviewed before dissemination.

Similar Articles

1
The Use of ChatGPT-4.0 to Simplify Breast Pathology Reports: A Study on Readability and Accuracy.
Ann Surg Oncol. 2025 Jul 21. doi: 10.1245/s10434-025-17860-2.
2
Enhancing the Readability of Online Patient Education Materials Using Large Language Models: Cross-Sectional Study.
J Med Internet Res. 2025 Jun 4;27:e69955. doi: 10.2196/69955.
3
Can Artificial Intelligence Improve the Readability of Patient Education Materials?
Clin Orthop Relat Res. 2023 Nov 1;481(11):2260-2267. doi: 10.1097/CORR.0000000000002668. Epub 2023 Apr 28.
4
Artificial Intelligence in Peripheral Artery Disease Education: A Battle Between ChatGPT and Google Gemini.
Cureus. 2025 Jun 1;17(6):e85174. doi: 10.7759/cureus.85174. eCollection 2025 Jun.
5
Artificial Intelligence Shows Limited Success in Improving Readability Levels of Spanish-language Orthopaedic Patient Education Materials.
Clin Orthop Relat Res. 2025 Feb 11. doi: 10.1097/CORR.0000000000003413.
6
Using Artificial Intelligence ChatGPT to Access Medical Information about Chemical Eye Injuries: A Comparative Study.
JMIR Form Res. 2025 Jun 30. doi: 10.2196/73642.
7
Is Information About Musculoskeletal Malignancies From Large Language Models or Web Resources at a Suitable Reading Level for Patients?
Clin Orthop Relat Res. 2025 Feb 1;483(2):306-315. doi: 10.1097/CORR.0000000000003263. Epub 2024 Sep 25.
8
Can artificial intelligence improve the readability of patient education information in gynecology?
Am J Obstet Gynecol. 2025 Jun 25. doi: 10.1016/j.ajog.2025.06.047.
9
Evaluation of Information Provided by ChatGPT Versions on Traumatic Dental Injuries for Dental Students and Professionals.
Dent Traumatol. 2025 Aug;41(4):427-436. doi: 10.1111/edt.13042. Epub 2025 Jan 23.
10
Accuracy and Readability of ChatGPT Responses to Patient-Centric Strabismus Questions.
J Pediatr Ophthalmol Strabismus. 2025 May-Jun;62(3):220-227. doi: 10.3928/01913913-20250110-02. Epub 2025 Feb 19.
