
Evaluation of ChatGPT as a Tool for Answering Clinical Questions in Pharmacy Practice.

Affiliations

University of Illinois Chicago College of Pharmacy, Chicago, IL, USA.

College of Pharmacy, The Ohio State University, Columbus, OH, USA.

Publication information

J Pharm Pract. 2024 Dec;37(6):1303-1310. doi: 10.1177/08971900241256731. Epub 2024 May 22.

Abstract

In the healthcare field, there has been growing interest in using artificial intelligence (AI)-powered tools to assist healthcare professionals, including pharmacists, in their daily tasks. This article provides commentary and insight into the potential of generative AI language models such as ChatGPT as tools for answering practice-based clinical questions, and the challenges that must be addressed before implementation in pharmacy practice settings. To assess ChatGPT, pharmacy-based questions were submitted to ChatGPT (Version 3.5; free version) and responses were recorded. Question types included 6 drug information questions, 6 enhanced-prompt drug information questions, 5 patient case questions, 5 calculations questions, and 10 drug knowledge questions (e.g., top 200 drugs). After all responses were collected, they were assessed for appropriateness. ChatGPT responses were generated from 32 questions in 5 categories and evaluated against a total of 44 possible points. Across all responses and categories, the overall score was 21 of 44 points (47.73%). ChatGPT scored higher in the pharmacy calculations (100%), drug information (83%), and top 200 drugs (80%) categories, and lower in the drug information enhanced-prompt (33%) and patient case (20%) categories. This study suggests that ChatGPT has limited success as a tool for answering pharmacy-based questions. ChatGPT scored higher on calculation and multiple-choice questions but lower on drug information and patient case questions, generating misleading or fictional answers and citations.
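The overall result reported above can be reproduced from the abstract's own numbers. A minimal sketch (the per-category point totals are not given in the abstract, so only the overall score is computed here):

```python
# Overall score reported in the abstract: 21 of 44 possible points.
points_earned = 21
points_possible = 44

overall_pct = round(points_earned / points_possible * 100, 2)
print(overall_pct)  # 47.73, matching the reported 47.73%
```

This confirms the reported overall percentage; the per-category percentages (100%, 83%, 80%, 33%, 20%) cannot be independently rederived without the point allocation per category, which the abstract does not state.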

