
Improving Patient Communication by Simplifying AI-Generated Dental Radiology Reports With ChatGPT: Comparative Study.

Author Information

Stephan Daniel, Bertsch Annika S, Schumacher Sophia, Puladi Behrus, Burwinkel Matthias, Al-Nawas Bilal, Kämmerer Peer W, Thiem Daniel G E

Affiliations

Department of Oral and Maxillofacial Surgery, Facial Plastic Surgery, University Medical Centre, Johannes Gutenberg-University Mainz, Mainz, Germany.

Department of Oral and Maxillofacial Surgery, University Hospital Rheinisch-Westfälische Technische Hochschule Aachen, Aachen, Germany.

Publication Information

J Med Internet Res. 2025 Jun 9;27:e73337. doi: 10.2196/73337.

PMID: 40489773
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12186002/
Abstract

BACKGROUND

Medical reports, particularly radiology findings, are often written for professional communication, making them difficult for patients to understand. This communication barrier can reduce patient engagement and lead to misinterpretation. Artificial intelligence (AI), especially large language models such as ChatGPT, offers new opportunities for simplifying medical documentation to improve patient comprehension.

OBJECTIVE

We aimed to evaluate whether AI-generated radiology reports simplified by ChatGPT improve patient understanding, readability, and communication quality compared to original AI-generated reports.

METHODS

In total, 3 versions of radiology reports were created using ChatGPT: an original AI-generated version (text 1), a patient-friendly, simplified version (text 2), and a further simplified and accessibility-optimized version (text 3). A total of 300 patients (n=100, 33.3% per group), excluding patients with medical education, were randomly assigned to review one text version and complete a standardized questionnaire. Readability was assessed using the Flesch Reading Ease (FRE) score and LIX indices.
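The study does not publish its exact prompts or model configuration, but the intervention is prompt-based rewriting of a report into progressively simpler versions. The Python sketch below is a hypothetical illustration of that step using the OpenAI chat API; the model name, prompt wording, and temperature are assumptions, not details taken from the paper.

```python
# Minimal sketch of prompt-based report simplification.
# Assumptions: model choice, prompt text, and parameters are illustrative,
# NOT the study's actual configuration.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

SIMPLIFY_PROMPT = (
    "Rewrite the following dental radiology report in plain, "
    "patient-friendly language. Keep every clinical finding, "
    "avoid jargon, and use short sentences."
)

def simplify_report(report_text: str) -> str:
    """Return a patient-friendly rewrite of a radiology report."""
    response = client.chat.completions.create(
        model="gpt-4o",  # hypothetical choice; the study's model is not specified here
        messages=[
            {"role": "system", "content": SIMPLIFY_PROMPT},
            {"role": "user", "content": report_text},
        ],
        temperature=0.2,  # low temperature keeps the rewrite conservative
    )
    return response.choices[0].message.content
```

Producing a further simplified, accessibility-optimized version (text 3) would amount to a second pass with a stricter prompt, e.g. targeting shorter sentences and fewer long words.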

RESULTS

Both simplified texts showed significantly higher readability scores (text 1: FRE score=51.1; text 2: FRE score=55.0; and text 3: FRE score=56.4; P<.001) and lower LIX scores, indicating enhanced clarity. Text 3 had the shortest sentences, had the fewest long words, and scored best on all patient-rated dimensions. Questionnaire results revealed significantly higher ratings for texts 2 and 3 across clarity (P<.001), tone (P<.001), structure, and patient engagement. For example, patients rated the ability to understand findings without help highest for text 3 (mean 1.5, SD 0.7) and lowest for text 1 (mean 3.1, SD 1.4). Both simplified texts significantly improved patients' ability to prepare for clinical conversations and promoted shared decision-making.
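For readers who want to reproduce readability figures like these on their own texts, here is a minimal sketch of the two indices named above. It assumes naive regex tokenization and a vowel-group syllable heuristic, and it shows the classic English-language FRE coefficients; German adaptations (e.g., Amstad's) use different constants, so scores from this sketch will not match the published numbers exactly.

```python
# Minimal sketch of the Flesch Reading Ease (FRE) and LIX indices.
# Assumptions: crude tokenization and syllable counting; English FRE constants.
import re

def _sentences(text: str) -> list[str]:
    return [s for s in re.split(r"[.!?]+", text) if s.strip()]

def _words(text: str) -> list[str]:
    return re.findall(r"[A-Za-zÄÖÜäöüß]+", text)

def _syllables(word: str) -> int:
    # crude approximation: count groups of consecutive vowels
    return max(1, len(re.findall(r"[aeiouyäöü]+", word.lower())))

def flesch_reading_ease(text: str) -> float:
    words, sents = _words(text), _sentences(text)
    if not words or not sents:
        return 0.0
    asl = len(words) / len(sents)                          # average sentence length
    asw = sum(_syllables(w) for w in words) / len(words)   # average syllables per word
    return 206.835 - 1.015 * asl - 84.6 * asw

def lix(text: str) -> float:
    words, sents = _words(text), _sentences(text)
    if not words or not sents:
        return 0.0
    long_words = sum(1 for w in words if len(w) > 6)       # words of 7+ letters
    return len(words) / len(sents) + 100 * long_words / len(words)
```

Higher FRE means easier text, while lower LIX means easier text, which is why the simplified versions score higher on FRE and lower on LIX.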

CONCLUSIONS

AI-generated simplification of radiology reports significantly enhances patient comprehension and engagement. These findings highlight the potential of ChatGPT as a tool to improve patient-centered communication. While promising, future research should focus on ensuring clinical accuracy and exploring applications across diverse patient populations to support equitable and effective integration of AI in health care communication.


Similar Articles

1. Improving Patient Communication by Simplifying AI-Generated Dental Radiology Reports With ChatGPT: Comparative Study.
   J Med Internet Res. 2025 Jun 9;27:e73337. doi: 10.2196/73337.
2. Enhancing the Readability of Online Patient Education Materials Using Large Language Models: Cross-Sectional Study.
   J Med Internet Res. 2025 Jun 4;27:e69955. doi: 10.2196/69955.
3. American Academy of Orthopaedic Surgeons OrthoInfo provides more readable information regarding rotator cuff injury than ChatGPT.
   J ISAKOS. 2025 Feb 12;12:100841. doi: 10.1016/j.jisako.2025.100841.
4. Large Language Model-Assisted Surgical Consent Forms in Non-English Language: Content Analysis and Readability Evaluation.
   J Med Internet Res. 2025 Jun 19;27:e73222. doi: 10.2196/73222.
5. Comparison of ChatGPT and Internet Research for Clinical Research and Decision-Making in Occupational Medicine: Randomized Controlled Trial.
   JMIR Form Res. 2025 May 20;9:e63857. doi: 10.2196/63857.
6. Signs and symptoms to determine if a patient presenting in primary care or hospital outpatient settings has COVID-19.
   Cochrane Database Syst Rev. 2022 May 20;5(5):CD013665. doi: 10.1002/14651858.CD013665.pub3.
7. Evaluating the readability, quality, and reliability of responses generated by ChatGPT, Gemini, and Perplexity on the most commonly asked questions about Ankylosing spondylitis.
   PLoS One. 2025 Jun 18;20(6):e0326351. doi: 10.1371/journal.pone.0326351. eCollection 2025.
8. Clinical Management of Wasp Stings Using Large Language Models: Cross-Sectional Evaluation Study.
   J Med Internet Res. 2025 Jun 4;27:e67489. doi: 10.2196/67489.
9. Using Natural Language Processing to Explore Patient Perspectives on AI Avatars in Support Materials for Patients With Breast Cancer: Survey Study.
   J Med Internet Res. 2025 Jun 20;27:e70971. doi: 10.2196/70971.
10. Bridging Health Literacy Gaps in Spine Care: Using ChatGPT-4o to Improve Patient-Education Materials.
    J Bone Joint Surg Am. 2025 Jun 19. doi: 10.2106/JBJS.24.01484.
