Suppr 超能文献



Digesting Digital Health: A Study of Appropriateness and Readability of ChatGPT-Generated Gastroenterological Information.

Affiliations

Department of Internal Medicine, Henry Ford Hospital, Detroit, Michigan, USA.

Division of Gastroenterology and Hepatology, Henry Ford Hospital, Detroit, Michigan, USA.

Publication

Clin Transl Gastroenterol. 2024 Nov 1;15(11):e00765. doi: 10.14309/ctg.0000000000000765.

DOI: 10.14309/ctg.0000000000000765
PMID: 39212302
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11596446/
Abstract

INTRODUCTION

The advent of artificial intelligence-powered large language models capable of generating interactive responses to intricate queries marks a groundbreaking development in how patients access medical information. Our aim was to evaluate the appropriateness and readability of gastroenterological information generated by Chat Generative Pretrained Transformer (ChatGPT).

METHODS

We analyzed responses generated by ChatGPT to 16 dialog-based queries assessing symptoms and treatments for gastrointestinal conditions and 13 definition-based queries on prevalent topics in gastroenterology. Three board-certified gastroenterologists evaluated output appropriateness with a 5-point Likert-scale proxy measurement of currency, relevance, accuracy, comprehensiveness, clarity, and urgency/next steps. Outputs with a score of 4 or 5 in all 6 categories were designated as "appropriate." Output readability was assessed with the Flesch Reading Ease score, Flesch-Kincaid Reading Level, and Simple Measure of Gobbledygook (SMOG) scores.
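The three readability metrics named above are simple surface formulas over sentence, word, and syllable counts. As an illustration only (the study presumably used dedicated readability software; the naive vowel-group syllable counter below is an assumption of this sketch), they can be computed as:

```python
import math
import re

def count_syllables(word):
    # Naive heuristic: each run of consecutive vowels counts as one syllable.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability(text):
    sentences = max(1, len(re.findall(r"[.!?]+", text)))
    words = re.findall(r"[A-Za-z']+", text)
    n_words = max(1, len(words))
    syllables = sum(count_syllables(w) for w in words)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)

    # Flesch Reading Ease: higher = easier (60-70 ~ plain English).
    fre = 206.835 - 1.015 * (n_words / sentences) - 84.6 * (syllables / n_words)
    # Flesch-Kincaid Grade Level: approximate US school grade.
    fkgl = 0.39 * (n_words / sentences) + 11.8 * (syllables / n_words) - 15.59
    # SMOG: grade estimate from polysyllabic words, normalized to 30 sentences.
    smog = 1.043 * math.sqrt(polysyllables * (30 / sentences)) + 3.1291
    return {"flesch_reading_ease": fre,
            "flesch_kincaid_grade": fkgl,
            "smog": smog}
```

A Flesch-Kincaid grade around 13 or higher corresponds to the college-level reading proficiency reported in the RESULTS below.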

RESULTS

ChatGPT responses to 44% of the 16 dialog-based and 69% of the 13 definition-based questions were deemed appropriate, and the proportion of appropriate responses within the 2 groups of questions was not significantly different (P = 0.17). Notably, none of ChatGPT's responses to questions related to gastrointestinal emergencies were designated appropriate. The mean readability scores showed that outputs were written at a college-level reading proficiency.

DISCUSSION

ChatGPT can produce generally fitting responses to gastroenterological medical queries, but responses were constrained in appropriateness and readability, which limits the current utility of this large language model. Substantial development is essential before these models can be unequivocally endorsed as reliable sources of medical information.


Figure: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f861/11596446/8b3523990186/ct9-15-e00765-g001.jpg

Similar articles

1
Digesting Digital Health: A Study of Appropriateness and Readability of ChatGPT-Generated Gastroenterological Information.
Clin Transl Gastroenterol. 2024 Nov 1;15(11):e00765. doi: 10.14309/ctg.0000000000000765.
2
Assessing the Quality and Reliability of ChatGPT's Responses to Radiotherapy-Related Patient Queries: Comparative Study With GPT-3.5 and GPT-4.
JMIR Cancer. 2025 Apr 16;11:e63677. doi: 10.2196/63677.
3
Appropriateness and readability of Google Bard and ChatGPT-3.5 generated responses for surgical treatment of glaucoma.
Rom J Ophthalmol. 2024 Jul-Sep;68(3):243-248. doi: 10.22336/rjo.2024.45.
4
Assessing the Clinical Appropriateness and Practical Utility of ChatGPT as an Educational Resource for Patients Considering Minimally Invasive Spine Surgery.
Cureus. 2024 Oct 8;16(10):e71105. doi: 10.7759/cureus.71105. eCollection 2024 Oct.
5
Both Patients and Plastic Surgeons Prefer Artificial Intelligence-Generated Microsurgical Information.
J Reconstr Microsurg. 2024 Nov;40(9):657-664. doi: 10.1055/a-2273-4163. Epub 2024 Feb 21.
6
Assessing the Responses of Large Language Models (ChatGPT-4, Claude 3, Gemini, and Microsoft Copilot) to Frequently Asked Questions in Retinopathy of Prematurity: A Study on Readability and Appropriateness.
J Pediatr Ophthalmol Strabismus. 2025 Mar-Apr;62(2):84-95. doi: 10.3928/01913913-20240911-05. Epub 2024 Oct 28.
7
Evaluating the Efficacy of ChatGPT as a Patient Education Tool in Prostate Cancer: Multimetric Assessment.
J Med Internet Res. 2024 Aug 14;26:e55939. doi: 10.2196/55939.
8
Evaluating the Effectiveness of Artificial Intelligence-powered Large Language Models Application in Disseminating Appropriate and Readable Health Information in Urology.
J Urol. 2023 Oct;210(4):688-694. doi: 10.1097/JU.0000000000003615. Epub 2023 Jul 10.
9
Generative artificial intelligence chatbots may provide appropriate informational responses to common vascular surgery questions by patients.
Vascular. 2025 Feb;33(1):229-237. doi: 10.1177/17085381241240550. Epub 2024 Mar 18.
10
ChatGPT vs. web search for patient questions: what does ChatGPT do better?
Eur Arch Otorhinolaryngol. 2024 Jun;281(6):3219-3225. doi: 10.1007/s00405-024-08524-0. Epub 2024 Feb 28.

Cited by

1
Perceptions and Attitudes of Chinese Oncologists Toward Endorsing AI-Driven Chatbots for Health Information Seeking Among Patients with Cancer: Phenomenological Qualitative Study.
J Med Internet Res. 2025 Jul 23;27:e71418. doi: 10.2196/71418.
2
The Potential Clinical Utility of the Customized Large Language Model in Gastroenterology: A Pilot Study.
Bioengineering (Basel). 2024 Dec 24;12(1):1. doi: 10.3390/bioengineering12010001.