

Enhancing Patient Comprehension of Glomerular Disease Treatments Using ChatGPT

Author Information

Abdelgadir Yasir H, Thongprayoon Charat, Craici Iasmina M, Cheungpasitporn Wisit, Miao Jing

Affiliation

Division of Nephrology and Hypertension, Department of Medicine, Mayo Clinic, Rochester, MN 55905, USA.

Publication Information

Healthcare (Basel). 2024 Dec 31;13(1):57. doi: 10.3390/healthcare13010057.

DOI: 10.3390/healthcare13010057
PMID: 39791664
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11719585/
Abstract

Background/Objectives: It is often challenging for patients to understand treatment options, their mechanisms of action, and the potential side effects of each treatment option for glomerular disorders. This study explored the ability of ChatGPT to simplify these treatment options to enhance patient understanding. Methods: GPT-4 was queried on sixty-seven glomerular disorders using two distinct queries: one for a general explanation and one for an explanation adjusted for an 8th-grade reading level or lower. Accuracy was rated on a scale of 1 (incorrect) to 5 (correct and comprehensive). Readability was measured using the average of the Flesch-Kincaid Grade (FKG) and SMOG indices, along with the Flesch Reading Ease (FRE) score. The understandability score (%) was determined using the Patient Education Materials Assessment Tool for Printable Materials (PEMAT-P). Results: GPT-4's general explanations had an average readability level of 12.85 ± 0.93, corresponding to the upper end of high school. When tailored for patients at or below an 8th-grade level, the readability improved to a middle school level of 8.44 ± 0.72. The FRE and PEMAT-P scores also reflected improved readability and understandability, increasing from 25.73 ± 6.98 to 60.75 ± 4.56 and from 60.7% to 76.8% (p < 0.0001 for both), respectively. The accuracy of GPT-4's tailored explanations was significantly lower than that of the general explanations (3.99 ± 0.39 versus 4.56 ± 0.66, p < 0.0001). Conclusions: ChatGPT shows significant potential for enhancing the readability and understandability of glomerular disorder therapies for patients, but at the cost of reduced comprehensiveness. Further research is needed to refine its performance, evaluate its real-world impact, and ensure the ethical use of ChatGPT in healthcare settings.
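For reference, the readability metrics named in the abstract follow standard published formulas. The sketch below is an illustrative assumption about how the study's composite readability level (the mean of FKG and SMOG) could be computed from word, sentence, and syllable counts; the paper does not specify its tokenization or syllable-counting pipeline, so the function and parameter names here are hypothetical.

```python
import math

def flesch_kincaid_grade(words, sentences, syllables):
    """Flesch-Kincaid Grade Level: the U.S. school grade of the text."""
    return 0.39 * (words / sentences) + 11.8 * (syllables / words) - 15.59

def flesch_reading_ease(words, sentences, syllables):
    """Flesch Reading Ease: higher is easier; 60-70 is roughly plain English."""
    return 206.835 - 1.015 * (words / sentences) - 84.6 * (syllables / words)

def smog_index(polysyllables, sentences):
    """SMOG grade, based on the count of words with 3+ syllables."""
    return 1.0430 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291

def composite_readability_level(words, sentences, syllables, polysyllables):
    """The study reports readability as the average of FKG and SMOG."""
    fkg = flesch_kincaid_grade(words, sentences, syllables)
    smog = smog_index(polysyllables, sentences)
    return (fkg + smog) / 2
```

A text of 100 words in 5 sentences with 150 syllables, for example, scores an FKG of about 9.9 (early high school), consistent with how the study maps grade-level scores to school bands.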


Figures:
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7d3a/11719585/f816809e9106/healthcare-13-00057-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7d3a/11719585/58c5a4437e15/healthcare-13-00057-g002.jpg

Similar Articles

1. Enhancing Patient Comprehension of Glomerular Disease Treatments Using ChatGPT. Healthcare (Basel). 2024 Dec 31;13(1):57. doi: 10.3390/healthcare13010057.
2. Evaluating the Efficacy of ChatGPT as a Patient Education Tool in Prostate Cancer: Multimetric Assessment. J Med Internet Res. 2024 Aug 14;26:e55939. doi: 10.2196/55939.
3. Evaluating the accuracy and readability of ChatGPT in providing parental guidance for adenoidectomy, tonsillectomy, and ventilation tube insertion surgery. Int J Pediatr Otorhinolaryngol. 2024 Jun;181:111998. doi: 10.1016/j.ijporl.2024.111998. Epub 2024 May 31.
4. Assessing the Quality and Reliability of ChatGPT's Responses to Radiotherapy-Related Patient Queries: Comparative Study With GPT-3.5 and GPT-4. JMIR Cancer. 2025 Apr 16;11:e63677. doi: 10.2196/63677.
5. Evaluation of Generative Language Models in Personalizing Medical Information: Instrument Validation Study. JMIR AI. 2024 Aug 13;3:e54371. doi: 10.2196/54371.
6. Artificial Intelligence-Prompted Explanations of Common Primary Care Diagnoses. PRiMER. 2024 Sep 17;8:51. doi: 10.22454/PRiMER.2024.916089. eCollection 2024.
7. Assessing readability of explanations and reliability of answers by GPT-3.5 and GPT-4 in non-traumatic spinal cord injury education. Med Teach. 2025 Jan 20:1-8. doi: 10.1080/0142159X.2024.2430365.
8. Unlocking the future of patient Education: ChatGPT vs. LexiComp® as sources of patient education materials. J Am Pharm Assoc (2003). 2025 Jan-Feb;65(1):102119. doi: 10.1016/j.japh.2024.102119. Epub 2024 May 8.
9. Performance of Artificial Intelligence Chatbots in Responding to Patient Queries Related to Traumatic Dental Injuries: A Comparative Study. Dent Traumatol. 2025 Jun;41(3):338-347. doi: 10.1111/edt.13020. Epub 2024 Nov 22.
10. The promising role of chatbots in keratorefractive surgery patient education. J Fr Ophtalmol. 2025 Feb;48(2):104381. doi: 10.1016/j.jfo.2024.104381. Epub 2024 Dec 13.

Cited By

1. Advancing health equity: evaluating AI translations of kidney donor information for Spanish speakers. Front Public Health. 2025 Jan 27;13:1484790. doi: 10.3389/fpubh.2025.1484790. eCollection 2025.

References Cited in This Article

1. Acceptability and readability of ChatGPT-4 based responses for frequently asked questions about strabismus and amblyopia. J Fr Ophtalmol. 2025 Mar;48(3):104400. doi: 10.1016/j.jfo.2024.104400. Epub 2024 Dec 20.
2. Comparative Evaluation of Information Quality on Colon Cancer for Patients: A Study of ChatGPT-4 and Google. Cureus. 2024 Nov 19;16(11):e73989. doi: 10.7759/cureus.73989. eCollection 2024 Nov.
3. Assessing the Quality, Readability, and Acceptability of AI-Generated Information in Plastic and Aesthetic Surgery. Cureus. 2024 Nov 17;16(11):e73874. doi: 10.7759/cureus.73874. eCollection 2024 Nov.
4. ChatGPT-4o's performance on pediatric Vesicoureteral reflux. J Pediatr Urol. 2025 Apr;21(2):504-509. doi: 10.1016/j.jpurol.2024.12.002. Epub 2024 Dec 7.
5. Exploring the Role of Large Language Models in Melanoma: A Systematic Review. J Clin Med. 2024 Dec 9;13(23):7480. doi: 10.3390/jcm13237480.
6. Leveraging large language models to improve patient education on dry eye disease. Eye (Lond). 2025 Apr;39(6):1115-1122. doi: 10.1038/s41433-024-03476-5. Epub 2024 Dec 16.
7. Assessing AI Simplification of Medical Texts: Readability and Content Fidelity. Int J Med Inform. 2025 Mar;195:105743. doi: 10.1016/j.ijmedinf.2024.105743. Epub 2024 Dec 1.
8. Can people with epilepsy trust AI chatbots for information on physical exercise? Epilepsy Behav. 2025 Feb;163:110193. doi: 10.1016/j.yebeh.2024.110193. Epub 2024 Dec 4.
9. Assessing the efficacy of artificial intelligence to provide peri-operative information for patients with a stoma. ANZ J Surg. 2025 Mar;95(3):464-496. doi: 10.1111/ans.19337. Epub 2024 Dec 2.
10. The use of ChatGPT was found to improve Mohs micrographic patient instructions readability. Int J Dermatol. 2025 Aug;64(8):1451-1452. doi: 10.1111/ijd.17589. Epub 2024 Nov 22.