Integrating artificial intelligence in renal cell carcinoma: evaluating ChatGPT's performance in educating patients and trainees.

Author Information

Mershon J Patrick, Posid Tasha, Salari Keyan, Matulewicz Richard S, Singer Eric A, Dason Shawn

Affiliations

Division of Urologic Oncology, The Ohio State University Comprehensive Cancer Center, Columbus, OH, USA.

Department of Urology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA.

Publication Information

Transl Cancer Res. 2024 Nov 30;13(11):6246-6254. doi: 10.21037/tcr-23-2234. Epub 2024 May 21.


DOI: 10.21037/tcr-23-2234
PMID: 39697745
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11651803/
Abstract

BACKGROUND: OpenAI's ChatGPT is a large language model-based artificial intelligence (AI) chatbot that can be used to answer unique, user-generated questions without direct training on specific content. Large language models have significant potential in urologic education. We reviewed the primary data surrounding the use of large language models in urology. We also reported findings of our primary study assessing the performance of ChatGPT in renal cell carcinoma (RCC) education.

METHODS: For our primary study, we utilized three professional society guidelines addressing RCC to generate fifteen content questions. These questions were inputted into ChatGPT 3.5. ChatGPT responses along with pre- and post-content assessment questions regarding ChatGPT were then presented to evaluators. Evaluators consisted of four urologic oncologists and four non-clinical staff members. Medline was reviewed for additional studies pertaining to the use of ChatGPT in urologic education.

RESULTS: We found that all assessors rated ChatGPT highly on the accuracy and usefulness of information provided, with overall mean scores of 3.64 [±0.62 standard deviation (SD)] and 3.58 (±0.75) out of 5, respectively. Clinicians and non-clinicians did not differ in their scoring of responses (P=0.37). Completing content assessment improved confidence in the accuracy of ChatGPT's information (P=0.01) and increased agreement that it should be used for medical education (P=0.007). Attitudes towards use for patient education did not change (P=0.30). We also review the current state of the literature regarding ChatGPT use for patient and trainee education and discuss future steps towards optimization.

CONCLUSIONS: ChatGPT has significant potential utility in medical education if it can continue to provide accurate and useful information. We have found it to be a useful adjunct to expert human guidance both for medical trainee education and, less so, for patient education. Further work is needed to validate ChatGPT before widespread adoption.
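The abstract does not specify how the questions were submitted to ChatGPT 3.5 or which statistical tests produced the reported P values, so the following is a minimal Python sketch of how such an evaluation pipeline might be reproduced programmatically. It assumes the OpenAI chat completions API for response collection and nonparametric tests (Mann-Whitney U for the clinician vs. non-clinician comparison, Wilcoxon signed-rank for the paired pre/post comparison); all questions and rating values are illustrative placeholders, not the study's data.

```python
# Hypothetical reconstruction of the evaluation workflow described in the
# abstract: submit guideline-derived questions to ChatGPT, then compare
# evaluator ratings between groups and before/after content assessment.
from openai import OpenAI
from scipy.stats import mannwhitneyu, wilcoxon

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment


def ask_chatgpt(question: str) -> str:
    """Submit one guideline-derived content question and return the reply text."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": question}],
    )
    return resp.choices[0].message.content


# Fifteen questions would be drawn from the three society guidelines;
# two hypothetical placeholders stand in here.
questions = [
    "What are the first-line systemic treatment options for metastatic clear cell RCC?",
    "When is active surveillance appropriate for a small renal mass?",
]
responses = {q: ask_chatgpt(q) for q in questions}

# Per-evaluator mean accuracy ratings on a 1-5 scale, split by group
# (illustrative values only, not the study's data).
clinician_scores = [4.1, 3.8, 3.5, 3.9]      # four urologic oncologists
nonclinician_scores = [3.4, 3.7, 3.6, 3.2]   # four non-clinical staff
_, p_group = mannwhitneyu(clinician_scores, nonclinician_scores)

# Pre- vs post-assessment confidence in ChatGPT's accuracy, paired per rater.
pre_confidence = [3, 3, 2, 4, 3, 2, 3, 3]
post_confidence = [4, 4, 3, 4, 4, 3, 4, 4]
_, p_prepost = wilcoxon(pre_confidence, post_confidence)

print(f"Clinician vs non-clinician ratings: P={p_group:.2f}")
print(f"Pre- vs post-assessment confidence: P={p_prepost:.3f}")
```

In this sketch the group comparison treats the two rater groups as independent samples and the pre/post comparison as paired within each rater, which mirrors the design described in the methods; the actual study may have used different tests or collected responses through the ChatGPT web interface rather than the API.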

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/48f9/11651803/6a3d11dbeb87/tcr-13-11-6246-f1.jpg
Figure 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/48f9/11651803/54d064446978/tcr-13-11-6246-f2.jpg

Similar Articles

[1]
Integrating artificial intelligence in renal cell carcinoma: evaluating ChatGPT's performance in educating patients and trainees.

Transl Cancer Res. 2024-11-30

[2]
Application of Large Language Models in Medical Training Evaluation-Using ChatGPT as a Standardized Patient: Multimetric Assessment.

J Med Internet Res. 2025-1-1

[3]
Performance of ChatGPT on the Chinese Postgraduate Examination for Clinical Medicine: Survey Study.

JMIR Med Educ. 2024-2-9

[4]
Optimizing ChatGPT's Interpretation and Reporting of Delirium Assessment Outcomes: Exploratory Study.

JMIR Form Res. 2024-10-1

[5]
ChatGPT's performance in German OB/GYN exams - paving the way for AI-enhanced medical education and clinical practice.

Front Med (Lausanne). 2023-12-13

[6]
Generative artificial intelligence chatbots may provide appropriate informational responses to common vascular surgery questions by patients.

Vascular. 2025-2

[7]
Evaluating the Potential of Large Language Models for Vestibular Rehabilitation Education: A Comparison of ChatGPT, Google Gemini, and Clinicians.

Phys Ther. 2025-4-2

[8]
How Does ChatGPT Perform on the United States Medical Licensing Examination (USMLE)? The Implications of Large Language Models for Medical Education and Knowledge Assessment.

JMIR Med Educ. 2023-2-8

[9]
Evaluating the Current Ability of ChatGPT to Assist in Professional Otolaryngology Education.

OTO Open. 2023-11-22

[10]
Enhanced Artificial Intelligence Strategies in Renal Oncology: Iterative Optimization and Comparative Analysis of GPT 3.5 Versus 4.0.

Ann Surg Oncol. 2024-6

Cited By

[1]
[AI-enabled clinical decision support systems: challenges and opportunities].

Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz. 2025-6-25

[2]
Artificial intelligence and patient education.

Curr Opin Urol. 2025-5-1

References

[1]
Artificial Intelligence Versus Expert Plastic Surgeon: Comparative Study Shows ChatGPT "Wins" Rhinoplasty Consultations: Should We Be Worried?

Facial Plast Surg Aesthet Med. 2024

[2]
ChatGPT Interactive Medical Simulations for Early Clinical Education: Case Study.

JMIR Med Educ. 2023-11-10

[3]
Awareness and Use of ChatGPT and Large Language Models: A Prospective Cross-sectional Global Survey in Urology.

Eur Urol. 2024-2

[4]
Comparison of ChatGPT and Traditional Patient Education Materials for Men's Health.

Urol Pract. 2024-1

[5]
Integrating ChatGPT in Medical Education: Adapting Curricula to Cultivate Competent Physicians for the AI Era.

Cureus. 2023-8-6

[6]
ChatGPT and Generative Artificial Intelligence for Medical Education: Potential Impact and Opportunity.

Acad Med. 2024-1-1

[7]
Emergence of artificial generative intelligence and its potential impact on urology.

Can J Urol. 2023-8

[8]
ChatGPT and most frequent urological diseases: analysing the quality of information and potential risks for patients.

World J Urol. 2023-11

[9]
Evaluating the performance of ChatGPT in answering questions related to pediatric urology.

J Pediatr Urol. 2024-2

[10]
How Well Do Artificial Intelligence Chatbots Respond to the Top Search Queries About Urological Malignancies?

Eur Urol. 2024-1
