Branum, Candise; Schiavenato, Martin
Health Science Librarian and Assistant Professor (Mx Branum), Foley Center Library, and Assistant Professor (Dr Schiavenato), School of Nursing and Human Physiology, Gonzaga University, Spokane, Washington.
Nurse Educ. 2023;48(5):231-233. doi: 10.1097/NNE.0000000000001436. Epub 2023 Apr 28.
BACKGROUND: ChatGPT, an artificial intelligence (AI) text generator trained to predict likely next words, can provide answers to questions but has shown mixed results when answering medical questions. PURPOSE: To assess the reliability and accuracy of ChatGPT's answers to a complex clinical question. METHODS: A question formatted in the Population, Intervention, Comparison, Outcome, and Time (PICOT) framework was submitted to ChatGPT, along with a request for supporting references. The full text of each cited article was reviewed to verify the accuracy of the evidence summary the chatbot provided. RESULTS: ChatGPT was unable to provide a verifiable response to the PICOT question. The references cited as evidence included incorrect journal information, and many of the study details ChatGPT summarized proved patently false, including fabricated data. CONCLUSIONS: ChatGPT produces answers that appear legitimate but may be factually incorrect. The system is not transparent about how it sources the data behind its answers and sometimes fabricates plausible-looking information, making it an unreliable tool for answering clinical questions.