

The future of patient education: A study on AI-driven responses to urinary incontinence inquiries.

Author Affiliations

Department of Urogynaecology, Cork University Maternity Hospital, Cork, Ireland.

Department of Obstetrics and Gynecology, Shaare Zedek Medical Center, Affiliated with the Hebrew University School of Medicine, Jerusalem, Israel.

Publication Information

Int J Gynaecol Obstet. 2024 Dec;167(3):1004-1009. doi: 10.1002/ijgo.15751. Epub 2024 Jun 30.

DOI: 10.1002/ijgo.15751
PMID: 38944693
Abstract

OBJECTIVE

To evaluate the effectiveness of ChatGPT in providing insights into common urinary incontinence concerns within urogynecology. By analyzing the model's responses against established benchmarks of accuracy, completeness, and safety, the study aimed to quantify its usefulness for informing patients and aiding healthcare providers.

METHODS

An expert-driven questionnaire was developed, inviting urogynecologists worldwide to assess ChatGPT's answers to 10 carefully selected questions on urinary incontinence (UI). These assessments focused on the accuracy of the responses, their comprehensiveness, and whether they raised any safety issues. Subsequent statistical analyses determined the average consensus among experts and identified the proportion of responses receiving favorable evaluations (a score of 4 or higher).
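The summary statistics described above (mean consensus and proportion of favorable ratings) reduce to a simple computation. A minimal sketch, assuming 1-5 rating scales and a favorability threshold of 4, with hypothetical ratings (the study's actual data and code are not published here):

```python
# Illustrative only: how the reported summary statistics can be derived
# from a panel of expert ratings on a single question.
from statistics import mean

def summarize_ratings(scores):
    """Return (average score, share of favorable ratings).

    A rating of 4 or higher counts as "favorable", matching the
    threshold used in the study's analysis.
    """
    avg = mean(scores)
    favorable = sum(1 for s in scores if s >= 4) / len(scores)
    return round(avg, 1), round(favorable, 2)

# Hypothetical ratings from ten experts on one question:
ratings = [5, 4, 3, 4, 4, 2, 5, 4, 3, 4]
avg, share = summarize_ratings(ratings)
print(avg, share)  # 3.8 0.7
```

Averaging these per-question results across all 10 questions and all responding experts yields domain-level figures like those reported in the results.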

RESULTS

Of the 50 urogynecologists who were approached worldwide, 37 responded, offering insights into ChatGPT's responses on UI. The overall feedback averaged a score of 4.0, indicating a positive acceptance. Accuracy scores averaged 3.9 with 71% rated favorably, whereas comprehensiveness scored slightly higher at 4 with 74% favorable ratings. Safety assessments also averaged 4 with 74% favorable responses.

CONCLUSION

This investigation underlines ChatGPT's favorable performance across the evaluated domains of accuracy, comprehensiveness, and safety within the context of UI queries. However, despite this broadly positive reception, the study also signals a clear avenue for improvement, particularly in the precision of the provided information. Refining ChatGPT's accuracy and ensuring the delivery of more pinpointed responses are essential steps forward, aiming to bolster its utility as a comprehensive educational resource for patients and a supportive tool for healthcare practitioners.


Similar Articles

1. The future of patient education: A study on AI-driven responses to urinary incontinence inquiries.
Int J Gynaecol Obstet. 2024 Dec;167(3):1004-1009. doi: 10.1002/ijgo.15751. Epub 2024 Jun 30.
2. Assessing question characteristic influences on ChatGPT's performance and response-explanation consistency: Insights from Taiwan's Nursing Licensing Exam.
Int J Nurs Stud. 2024 May;153:104717. doi: 10.1016/j.ijnurstu.2024.104717. Epub 2024 Feb 8.
3. ChatGPT's performance in German OB/GYN exams - paving the way for AI-enhanced medical education and clinical practice.
Front Med (Lausanne). 2023 Dec 13;10:1296615. doi: 10.3389/fmed.2023.1296615. eCollection 2023.
4. Performance of ChatGPT on the Chinese Postgraduate Examination for Clinical Medicine: Survey Study.
JMIR Med Educ. 2024 Feb 9;10:e48514. doi: 10.2196/48514.
5. Assessing the Accuracy of Generative Conversational Artificial Intelligence in Debunking Sleep Health Myths: Mixed Methods Comparative Study With Expert Analysis.
JMIR Form Res. 2024 Apr 16;8:e55762. doi: 10.2196/55762.
6. Evaluating the validity of ChatGPT responses on common obstetric issues: Potential clinical applications and implications.
Int J Gynaecol Obstet. 2024 Sep;166(3):1127-1133. doi: 10.1002/ijgo.15501. Epub 2024 Mar 25.
7. AUA Guideline Committee Members Determine Quality of Artificial Intelligence‒Generated Responses for Female Stress Urinary Incontinence.
Urol Pract. 2024 Jul;11(4):693-698. doi: 10.1097/UPJ.0000000000000577. Epub 2024 May 8.
8. Evaluating ChatGPT to test its robustness as an interactive information database of radiation oncology and to assess its responses to common queries from radiotherapy patients: A single institution investigation.
Cancer Radiother. 2024 Jun;28(3):258-264. doi: 10.1016/j.canrad.2023.11.005. Epub 2024 Jun 12.
9. Assessing ChatGPT's Responses to Otolaryngology Patient Questions.
Ann Otol Rhinol Laryngol. 2024 Jul;133(7):658-664. doi: 10.1177/00034894241249621. Epub 2024 Apr 27.
10. Evaluating the Efficacy of ChatGPT as a Patient Education Tool in Prostate Cancer: Multimetric Assessment.
J Med Internet Res. 2024 Aug 14;26:e55939. doi: 10.2196/55939.

Cited By

1. Large language models and women's health: a digital companion for informed decision-making.
Arch Gynecol Obstet. 2025 Jun 21. doi: 10.1007/s00404-025-08065-9.
2. Artificial intelligence and patient education.
Curr Opin Urol. 2025 May 1;35(3):219-223. doi: 10.1097/MOU.0000000000001267. Epub 2025 Feb 12.
3. Application of ChatGPT-assisted problem-based learning teaching method in clinical medical education.
BMC Med Educ. 2025 Jan 11;25(1):50. doi: 10.1186/s12909-024-06321-1.