
Evaluating the performance of ChatGPT in answering questions related to pediatric urology.

Author Information

Caglar Ufuk, Yildiz Oguzhan, Meric Arda, Ayranci Ali, Gelmis Mucahit, Sarilar Omer, Ozgor Faruk

Affiliations

Department of Urology, Haseki Training and Research Hospital, Istanbul, Turkey.


Publication Information

J Pediatr Urol. 2024 Feb;20(1):26.e1-26.e5. doi: 10.1016/j.jpurol.2023.08.003. Epub 2023 Aug 7.


DOI: 10.1016/j.jpurol.2023.08.003
PMID: 37596194
Abstract

INTRODUCTION: Artificial intelligence is advancing in various domains, including medicine, and its progress is expected to continue.

OBJECTIVE: This study aimed to assess the accuracy and consistency of ChatGPT's responses to frequently asked questions about pediatric urology.

MATERIALS AND METHODS: We compiled frequently asked pediatric urology questions from urology association websites, hospitals, and social media platforms. We also drew on the recommendation tables of the European Association of Urology (EAU) 2022 Guidelines on Pediatric Urology, restricted to items supported at the strong-recommendation level. All questions were systematically presented to ChatGPT (May 23 version), and two expert urologists independently scored each response from 1 to 4.

RESULTS: One hundred thirty-seven questions about pediatric urology were included in the study. Of the responses, 92.0% were completely correct; among questions based on the EAU guideline's strong recommendations, the completely correct rate was 93.6%. No question was answered completely incorrectly. Similarity rates between answers to repeated questions ranged from 93.8% to 100%.

CONCLUSION: ChatGPT provided satisfactory responses to pediatric urology questions. Despite its limitations, it is foreseeable that this continuously evolving platform will occupy a crucial position in the healthcare industry.
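The results above reduce to simple proportion arithmetic over the reviewers' 1-4 grades. A minimal sketch of that computation, assuming score 1 denotes a "completely correct" answer (the scale direction is not stated in the abstract, and the score data below are invented for illustration):

```python
# Hypothetical reviewer grades on the study's 1-4 scale.
# Assumption: 1 = completely correct (the abstract does not
# define the scale direction); these values are invented.
scores = [1, 1, 1, 2, 1, 1, 3, 1, 1, 1]

def completely_correct_rate(grades):
    """Percentage of answers graded 'completely correct' (grade == 1)."""
    return 100.0 * sum(1 for g in grades if g == 1) / len(grades)

print(f"{completely_correct_rate(scores):.1f}% completely correct")
```

The study's 92.0% overall and 93.6% strong-recommendation figures would be instances of this same ratio computed over the full 137-question set and the guideline subset, respectively.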


Similar Articles

[1]
Evaluating the performance of ChatGPT in answering questions related to pediatric urology.

J Pediatr Urol. 2024-2

[2]
Evaluating the performance of ChatGPT in answering questions related to urolithiasis.

Int Urol Nephrol. 2024-1

[3]
Evaluating the performance of ChatGPT in answering questions related to benign prostate hyperplasia and prostate cancer.

Minerva Urol Nephrol. 2023-12

[4]
Assessing the Performance of Chat Generative Pretrained Transformer (ChatGPT) in Answering Andrology-Related Questions.

Urol Res Pract. 2023-11

[5]
Analyzing the Performance of ChatGPT About Osteoporosis.

Cureus. 2023-9-25

[6]
Urological Cancers and ChatGPT: Assessing the Quality of Information and Possible Risks for Patients.

Clin Genitourin Cancer. 2024-4

[7]
Evaluating the Performance of ChatGPT in Urology: A Comparative Study of Knowledge Interpretation and Patient Guidance.

J Endourol. 2024-8

[8]
Can ChatGPT help patients understand their andrological diseases?

Rev Int Androl. 2024-6

[9]
The efficacy of artificial intelligence in urology: a detailed analysis of kidney stone-related queries.

World J Urol. 2024-3-14

[10]
Evaluating ChatGPT's effectiveness and tendencies in Japanese internal medicine.

J Eval Clin Pract. 2024-9

Cited By

[1]
Leveraging ChatGPT to strengthen pediatric healthcare systems: a systematic review.

Eur J Pediatr. 2025-7-12

[2]
Assessment of artificial intelligence performance in answering questions on onabotulinum toxin and sacral neuromodulation.

Investig Clin Urol. 2025-5

[3]
Use of Artificial Intelligence in Vesicoureteral Reflux Disease: A Comparative Study of Guideline Compliance.

J Clin Med. 2025-3-30

[4]
Online Health Information-Seeking in the Era of Large Language Models: Cross-Sectional Web-Based Survey Study.

J Med Internet Res. 2025-3-31

[5]
ChatGPT's competence in responding to urological emergencies.

Ulus Travma Acil Cerrahi Derg. 2025-3

[6]
Evaluating interactions of patients with large language models for medical information.

BJU Int. 2025-6

[7]
Large Language Models for Chatbot Health Advice Studies: A Systematic Review.

JAMA Netw Open. 2025-2-3

[8]
Comparison of the performances between ChatGPT and Gemini in answering questions on viral hepatitis.

Sci Rep. 2025-1-11

[9]
Integrating artificial intelligence in renal cell carcinoma: evaluating ChatGPT's performance in educating patients and trainees.

Transl Cancer Res. 2024-11-30

[10]
Analyzing evaluation methods for large language models in the medical field: a scoping review.

BMC Med Inform Decis Mak. 2024-11-29
