Assessing the Quality of Patient Education Materials on Cardiac Catheterization From Artificial Intelligence Chatbots: An Observational Cross-Sectional Study.

Author Information

Behers Benjamin J, Stephenson-Moe Christoph A, Gibons Rebecca M, Vargas Ian A, Wojtas Caroline N, Rosario Manuel A, Anneaud Djhemson, Nord Profilia, Hamad Karen M, Baker Joel F

Affiliations

Department of Internal Medicine, Sarasota Memorial Hospital, Sarasota, USA.

Department of Clinical Sciences, Florida State University College of Medicine, Tallahassee, USA.

Publication Information

Cureus. 2024 Sep 23;16(9):e69996. doi: 10.7759/cureus.69996. eCollection 2024 Sep.


DOI: 10.7759/cureus.69996
PMID: 39445289
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11498076/
Abstract

Background: Health literacy empowers patients to participate in their own healthcare. Personal health literacy is one's ability to find, understand, and use information and resources to make well-informed health decisions. Artificial intelligence (AI) has become a source for the acquisition of health-related information through large language model (LLM)-driven chatbots. Assessment of the readability and quality of health information produced by these chatbots has been the subject of numerous studies to date. This study seeks to assess the quality of patient education materials on cardiac catheterization produced by AI chatbots.

Methodology: We asked a set of 10 questions about cardiac catheterization to four chatbots: ChatGPT (OpenAI, San Francisco, CA), Microsoft Copilot (Microsoft Corporation, Redmond, WA), Google Gemini (Google DeepMind, London, UK), and Meta AI (Meta, New York, NY). The questions and subsequent answers were used to make patient education materials on cardiac catheterization. The quality of these materials was assessed using two validated instruments for patient education materials: DISCERN and the Patient Education Materials Assessment Tool (PEMAT).

Results: The overall DISCERN scores were 4.5 for ChatGPT, 4.4 for Microsoft Copilot and Google Gemini, and 3.8 for Meta AI. ChatGPT, Microsoft Copilot, and Google Gemini tied for the highest reliability score at 4.6, while Meta AI had the lowest at 4.2. ChatGPT had the highest quality score at 4.4, while Meta AI had the lowest at 3.4. ChatGPT and Google Gemini had Understandability scores of 100%, while Meta AI had the lowest at 82%. ChatGPT, Microsoft Copilot, and Google Gemini all had Actionability scores of 75%, while Meta AI scored 50%.

Conclusions: ChatGPT produced the most reliable and highest quality materials, followed closely by Google Gemini. Meta AI produced the lowest quality materials. Given the easy accessibility that chatbots provide patients and the high-quality responses that we obtained, they could be a reliable source for patients to obtain information about cardiac catheterization.
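
As background for the Results figures: the reported DISCERN values appear to be averages of items rated on DISCERN's 1-to-5 scale, and PEMAT Understandability and Actionability are defined as the percentage of applicable yes/no items rated "agree". The minimal Python sketch below illustrates both calculations; the item ratings and counts in it are hypothetical examples chosen to produce figures shaped like those reported, not the study's data or the full validated instruments.

```python
# Illustrative sketch of how the two instruments' scores are computed.
# All item ratings below are hypothetical, not data from this study.

def discern_score(item_ratings):
    """Mean of DISCERN-style items, each rated on a 1-5 scale."""
    return round(sum(item_ratings) / len(item_ratings), 1)

def pemat_score(item_ratings):
    """PEMAT-style percentage: applicable items rated 'agree' (1) over
    all applicable items; None marks a not-applicable item."""
    applicable = [r for r in item_ratings if r is not None]
    return round(100 * sum(applicable) / len(applicable))

# Hypothetical ratings producing figures shaped like those reported:
print(discern_score([5, 4, 5, 4, 5, 4, 5, 4]))      # 4.5
print(pemat_score([1, 1, 1, 1, 1, 1, None, 1]))     # 100 (Understandability)
print(pemat_score([1, 1, 0, None, 1]))              # 75 (Actionability)
```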

Figure: article image (https://cdn.ncbi.nlm.nih.gov/pmc/blobs/888c/11498076/33ac7799733b/cureus-0016-00000069996-i01.jpg)

Similar Articles

[1]
Assessing the Quality of Patient Education Materials on Cardiac Catheterization From Artificial Intelligence Chatbots: An Observational Cross-Sectional Study.

Cureus. 2024-9-23

[2]
Assessing the Readability of Patient Education Materials on Cardiac Catheterization From Artificial Intelligence Chatbots: An Observational Cross-Sectional Study.

Cureus. 2024-7-4

[3]
Assessment of readability, reliability, and quality of ChatGPT®, BARD®, Gemini®, Copilot®, Perplexity® responses on palliative care.

Medicine (Baltimore). 2024-8-16

[4]
Comparative accuracy of ChatGPT-4, Microsoft Copilot and Google Gemini in the Italian entrance test for healthcare sciences degrees: a cross-sectional study.

BMC Med Educ. 2024-6-26

[5]
Accuracy and Readability of Artificial Intelligence Chatbot Responses to Vasectomy-Related Questions: Public Beware.

Cureus. 2024-8-28

[6]
Can artificial intelligence models serve as patient information consultants in orthodontics?

BMC Med Inform Decis Mak. 2024-7-29

[7]
Assessment of Artificial Intelligence Chatbot Responses to Top Searched Queries About Cancer.

JAMA Oncol. 2023-10-1

[8]
Comparative Analysis of Accuracy, Readability, Sentiment, and Actionability: Artificial Intelligence Chatbots (ChatGPT and Google Gemini) versus Traditional Patient Information Leaflets for Local Anesthesia in Eye Surgery.

Br Ir Orthopt J. 2024-8-19

[9]
Chatbots talk Strabismus: Can AI become the new patient Educator?

Int J Med Inform. 2024-11

[10]
Evaluating the Efficacy of ChatGPT as a Patient Education Tool in Prostate Cancer: Multimetric Assessment.

J Med Internet Res. 2024-8-14

Cited By

[1]
Using large language models to generate child-friendly education materials on myopia.

Digit Health. 2025-7-30

[2]
Artificial intelligence in pediatric dental trauma: do artificial intelligence chatbots address parental concerns effectively?

BMC Oral Health. 2025-5-17

[3]
Assessing the quality and readability of patient education materials on chemotherapy cardiotoxicity from artificial intelligence chatbots: An observational cross-sectional study.

Medicine (Baltimore). 2025-4-11

