Department of Internal Medicine, Henry Ford Hospital, Detroit, Michigan, USA.
Division of Gastroenterology and Hepatology, Henry Ford Hospital, Detroit, Michigan, USA.
Clin Transl Gastroenterol. 2024 Nov 1;15(11):e00765. doi: 10.14309/ctg.0000000000000765.
The advent of artificial intelligence-powered large language models capable of generating interactive responses to intricate queries marks a groundbreaking development in how patients access medical information. Our aim was to evaluate the appropriateness and readability of gastroenterological information generated by Chat Generative Pretrained Transformer (ChatGPT).
We analyzed ChatGPT's responses to 16 dialog-based queries assessing symptoms and treatments for gastrointestinal conditions and 13 definition-based queries on prevalent topics in gastroenterology. Three board-certified gastroenterologists rated output appropriateness on a 5-point Likert scale across 6 proxy dimensions: currency, relevance, accuracy, comprehensiveness, clarity, and urgency/next steps. Outputs scoring 4 or 5 in all 6 categories were designated "appropriate." Output readability was assessed with the Flesch Reading Ease score, the Flesch-Kincaid Grade Level, and the Simple Measure of Gobbledygook (SMOG) score.
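For reference, all three readability indices are closed-form functions of sentence, word, and syllable counts. The sketch below shows the published formulas in Python; the abstract does not describe the scoring software the authors used, and the vowel-run syllable counter here is a rough stand-in for the dictionary-based counting that dedicated tools perform.

```python
import math
import re

def count_syllables(word: str) -> int:
    # Crude heuristic: count runs of vowels; real scorers use pronunciation dictionaries.
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def readability_scores(text: str) -> dict:
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    polysyllables = sum(1 for w in words if count_syllables(w) >= 3)
    n_sent, n_words = len(sentences), len(words)
    return {
        # Flesch Reading Ease: 0-100 scale; scores below ~30 read at a college-graduate level.
        "flesch_reading_ease": 206.835
        - 1.015 * (n_words / n_sent)
        - 84.6 * (syllables / n_words),
        # Flesch-Kincaid Grade Level: estimated US school grade.
        "flesch_kincaid_grade": 0.39 * (n_words / n_sent)
        + 11.8 * (syllables / n_words)
        - 15.59,
        # SMOG: grade level driven by the density of 3+-syllable words;
        # formally defined for samples of 30 sentences, extrapolated here.
        "smog": 1.0430 * math.sqrt(polysyllables * 30 / n_sent) + 3.1291,
    }

sample = ("Gastroesophageal reflux disease occurs when stomach acid "
          "repeatedly flows back into the esophagus.")
print(readability_scores(sample))
```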
ChatGPT responses to 44% (7/16) of the dialog-based and 69% (9/13) of the definition-based questions were deemed appropriate, and the proportion of appropriate responses did not differ significantly between the 2 groups of questions (P = 0.17). Notably, none of ChatGPT's responses to questions related to gastrointestinal emergencies were designated appropriate. The mean readability scores indicated that outputs were written at a college-level reading proficiency.
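The abstract does not name the statistical test used for this comparison; as a hedged illustration, a pooled two-proportion z-test on the counts back-calculated from the reported percentages (7 of 16 vs. 9 of 13 appropriate) reproduces the reported P value. A minimal standard-library-only sketch:

```python
import math

def two_proportion_ztest(x1: int, n1: int, x2: int, n2: int) -> tuple[float, float]:
    """Two-sided pooled z-test for the difference of two proportions."""
    p1, p2 = x1 / n1, x2 / n2
    pooled = (x1 + x2) / (n1 + n2)
    se = math.sqrt(pooled * (1 - pooled) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Two-sided p-value from the standard normal CDF via erf.
    p_value = 2 * (1 - 0.5 * (1 + math.erf(abs(z) / math.sqrt(2))))
    return z, p_value

# Counts inferred from the reported 44% of 16 and 69% of 13 (an assumption,
# not stated in the abstract); yields z ~ -1.37, P ~ 0.17.
z, p = two_proportion_ztest(7, 16, 9, 13)
print(f"z = {z:.2f}, P = {p:.2f}")
```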
ChatGPT can produce broadly suitable responses to gastroenterological medical queries, but shortcomings in appropriateness and readability limit the current utility of this large language model. Substantial development is essential before these models can be unequivocally endorsed as reliable sources of medical information.