

Similar Articles

1. Physician vs. AI-generated messages in urology: evaluation of accuracy, completeness, and preference by patients and physicians. World J Urol. 2024 Dec 27;43(1):48. doi: 10.1007/s00345-024-05399-y.
2. Quality of Chatbot Information Related to Benign Prostatic Hyperplasia. Prostate. 2025 Feb;85(2):175-180. doi: 10.1002/pros.24814. Epub 2024 Nov 8.
3. Comparing Physician and Artificial Intelligence Chatbot Responses to Patient Questions Posted to a Public Social Media Forum. JAMA Intern Med. 2023 Jun 1;183(6):589-596. doi: 10.1001/jamainternmed.2023.1838.
4. Accuracy and Readability of Artificial Intelligence Chatbot Responses to Vasectomy-Related Questions: Public Beware. Cureus. 2024 Aug 28;16(8):e67996. doi: 10.7759/cureus.67996. eCollection 2024 Aug.
5. Accuracy of Prospective Assessments of 4 Large Language Model Chatbot Responses to Patient Questions About Emergency Care: Experimental Comparative Study. J Med Internet Res. 2024 Nov 4;26:e60291. doi: 10.2196/60291.
6. Assessing the Accuracy of Information on Medication Abortion: A Comparative Analysis of ChatGPT and Google Bard AI. Cureus. 2024 Jan 2;16(1):e51544. doi: 10.7759/cureus.51544. eCollection 2024 Jan.
7. Physician and Artificial Intelligence Chatbot Responses to Cancer Questions From Social Media. JAMA Oncol. 2024 Jul 1;10(7):956-960. doi: 10.1001/jamaoncol.2024.0836.
8. Accuracy and Reliability of Chatbot Responses to Physician Questions. JAMA Netw Open. 2023 Oct 2;6(10):e2336483. doi: 10.1001/jamanetworkopen.2023.36483.
9. Performance of ChatGPT-4 and Bard chatbots in responding to common patient questions on prostate cancer Lu-PSMA-617 therapy. Front Oncol. 2024 Jul 12;14:1386718. doi: 10.3389/fonc.2024.1386718. eCollection 2024.
10. Doctor Versus Artificial Intelligence: Patient and Physician Evaluation of Large Language Model Responses to Rheumatology Patient Questions in a Cross-Sectional Study. Arthritis Rheumatol. 2024 Mar;76(3):479-484. doi: 10.1002/art.42737. Epub 2024 Jan 18.

Cited By

1. [AI-enabled clinical decision support systems: challenges and opportunities]. Bundesgesundheitsblatt Gesundheitsforschung Gesundheitsschutz. 2025 Jun 25. doi: 10.1007/s00103-025-04092-8.
2. Development and external validation of a nomogram for predicting sepsis following flexible ureteroscopy. Eur J Med Res. 2025 Jun 13;30(1):479. doi: 10.1186/s40001-025-02754-6.
3. Optimizing AI-assisted communication in urology: potential and challenges. World J Urol. 2025 Feb 14;43(1):122. doi: 10.1007/s00345-025-05508-5.
4. Comment on "Physician vs. AI-generated messages in urology: evaluation of accuracy, completeness, and preference by patients and physicians". World J Urol. 2025 Jan 22;43(1):83. doi: 10.1007/s00345-025-05448-0.

References

1. Transitioning from "Dr. Google" to "Dr. ChatGPT": the advent of artificial intelligence chatbots. Transl Androl Urol. 2024 Jun 30;13(6):1067-1070. doi: 10.21037/tau-23-629. Epub 2024 May 23.
2. Patient Perceptions of Chatbot Supervision in Health Care Settings. JAMA Netw Open. 2024 Apr 1;7(4):e248833. doi: 10.1001/jamanetworkopen.2024.8833.
3. Artificial Intelligence-Generated Draft Replies to Patient Inbox Messages. JAMA Netw Open. 2024 Mar 4;7(3):e243201. doi: 10.1001/jamanetworkopen.2024.3201.
4. The efficacy of artificial intelligence in urology: a detailed analysis of kidney stone-related queries. World J Urol. 2024 Mar 14;42(1):158. doi: 10.1007/s00345-024-04847-z.
5. AI Chatbots and Challenges of HIPAA Compliance for AI Developers and Vendors. J Law Med Ethics. 2023;51(4):988-995. doi: 10.1017/jme.2024.15. Epub 2024 Mar 13.
6. Usefulness and Accuracy of Artificial Intelligence Chatbot Responses to Patient Questions for Neurosurgical Procedures. Neurosurgery. 2024 Feb 14. doi: 10.1227/neu.0000000000002856.
7. Awareness and Use of ChatGPT and Large Language Models: A Prospective Cross-sectional Global Survey in Urology. Eur Urol. 2024 Feb;85(2):146-153. doi: 10.1016/j.eururo.2023.10.014. Epub 2023 Nov 4.
8. Patients' Trust in Artificial Intelligence-based Decision-making for Localized Prostate Cancer: Results from a Prospective Trial. Eur Urol Focus. 2024 Jul;10(4):654-661. doi: 10.1016/j.euf.2023.10.020. Epub 2023 Nov 1.
9. Accuracy and Reliability of Chatbot Responses to Physician Questions. JAMA Netw Open. 2023 Oct 2;6(10):e2336483. doi: 10.1001/jamanetworkopen.2023.36483.
10. Changes in patient perceptions regarding ChatGPT-written explanations on lifestyle modifications for preventing urolithiasis recurrence. Digit Health. 2023 Sep 28;9:20552076231203940. doi: 10.1177/20552076231203940. eCollection 2023 Jan-Dec.

Physician vs. AI-generated messages in urology: evaluation of accuracy, completeness, and preference by patients and physicians.

Authors

Robinson Eric J, Qiu Chunyuan, Sands Stuart, Khan Mohammad, Vora Shivang, Oshima Kenichiro, Nguyen Khang, DiFronzo L Andrew, Rhew David, Feng Mark I

Affiliations

Department of Urology, Los Angeles Medical Center, Kaiser Permanente, Los Angeles, CA, USA.

Department of Anesthesiology, Baldwin Park Medical Center, Kaiser Permanente, Baldwin Park, CA, USA.

Publication

World J Urol. 2024 Dec 27;43(1):48. doi: 10.1007/s00345-024-05399-y.

DOI: 10.1007/s00345-024-05399-y
PMID: 39729119
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11680670/
Abstract

PURPOSE

To evaluate the accuracy, comprehensiveness, empathetic tone, and patient preference for AI and urologist responses to patient messages concerning common BPH questions across phases of care.

METHODS

Cross-sectional study evaluating responses to 20 BPH-related questions generated by 2 AI chatbots and 4 urologists in a simulated clinical messaging environment without direct patient interaction. Accuracy, completeness, and empathetic tone of responses assessed by experts using Likert scales, and preferences and perceptions of authorship (chatbot vs. human) rated by non-medical evaluators.

RESULTS

Five non-medical volunteers independently evaluated, ranked, and inferred the source for 120 responses (n = 600 total). For volunteer evaluations, the mean (SD) score of chatbots, 3.0 (1.4) (moderately empathetic) was significantly higher than urologists, 2.1 (1.1) (slightly empathetic) (p < 0.001); mean (SD) and preference ranking for chatbots, 2.6 (1.6), was significantly higher than urologist ranking, 3.9 (1.6) (p < 0.001). Two subject matter experts (SMEs) independently evaluated 120 responses each (answers to 20 questions from 4 urologist and 2 chatbots, n = 240 total). For SME evaluations, mean (SD) accuracy score for chatbots was 4.5 (1.1) (nearly all correct) and not significantly different than urologists, 4.6 (1.2). The mean (SD) completeness score for chatbots was 2.4 (0.8) (comprehensive), significantly higher than urologists, 1.6 (0.6) (adequate) (p < 0.001).

CONCLUSION

Answers to patient BPH messages generated by chatbots were evaluated by experts as equally accurate and more complete than urologist answers. Non-medical volunteers preferred chatbot-generated messages and considered them more empathetic compared to answers generated by urologists.
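The group comparisons reported above (mean Likert scores with p-values) can be sketched with a nonparametric permutation test on the difference in means, which is a reasonable choice for ordinal Likert ratings. This is a minimal stdlib-only sketch; the ratings below are hypothetical (the paper's raw per-response data are not reproduced here), chosen only so the group means echo the reported empathy scores of 3.0 vs. 2.1:

```python
import random
import statistics

def permutation_test(a, b, n_iter=10000, seed=0):
    """Two-sided permutation test on the difference in group means.

    Repeatedly shuffles the pooled ratings and counts how often a
    random split produces a mean difference at least as extreme as
    the observed one.
    """
    rng = random.Random(seed)
    observed = statistics.mean(a) - statistics.mean(b)
    pooled = list(a) + list(b)
    extreme = 0
    for _ in range(n_iter):
        rng.shuffle(pooled)
        diff = statistics.mean(pooled[:len(a)]) - statistics.mean(pooled[len(a):])
        if abs(diff) >= abs(observed):
            extreme += 1
    return extreme / n_iter

# Hypothetical 1-5 empathy ratings for two groups of responses
chatbot = [3, 4, 3, 2, 5, 3, 2, 2, 3, 3]      # mean 3.0
urologist = [2, 2, 3, 1, 2, 3, 2, 1, 2, 3]    # mean 2.1

print(statistics.mean(chatbot), statistics.pstdev(urologist))
print(permutation_test(chatbot, urologist))
```

With real data one would also report the standard deviation per group, as the paper does; `statistics.pstdev` (or `stdev` for a sample estimate) covers that.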
