Suppr 超能文献


Similar Articles

1. Quality of Chatbot Responses to the Most Popular Questions Regarding Erectile Dysfunction. Urol Res Pract. 2025 Jan 3;50(4):253-260. doi: 10.5152/tud.2025.24098.
2. Can artificial intelligence models serve as patient information consultants in orthodontics? BMC Med Inform Decis Mak. 2024 Jul 29;24(1):211. doi: 10.1186/s12911-024-02619-8.
3. Exploring AI-chatbots' capability to suggest surgical planning in ophthalmology: ChatGPT versus Google Gemini analysis of retinal detachment cases. Br J Ophthalmol. 2024 Sep 20;108(10):1457-1469. doi: 10.1136/bjo-2023-325143.
4. Performance of Artificial Intelligence Chatbots in Responding to Patient Queries Related to Traumatic Dental Injuries: A Comparative Study. Dent Traumatol. 2025 Jun;41(3):338-347. doi: 10.1111/edt.13020. Epub 2024 Nov 22.
5. Readability, reliability and quality of responses generated by ChatGPT, Gemini, and Perplexity for the most frequently asked questions about pain. Medicine (Baltimore). 2025 Mar 14;104(11):e41780. doi: 10.1097/MD.0000000000041780.
6. Assessing the knowledge of ChatGPT and Google Gemini in answering peripheral artery disease-related questions. Vascular. 2025 Jan 21:17085381251315999. doi: 10.1177/17085381251315999.
7. Comparative evaluation of ChatGPT-4, ChatGPT-3.5 and Google Gemini on PCOS assessment and management based on recommendations from the 2023 guideline. Endocrine. 2025 Apr;88(1):315-322. doi: 10.1007/s12020-024-04121-7. Epub 2024 Dec 2.
8. Comparative performance of artificial intelligence models in rheumatology board-level questions: evaluating Google Gemini and ChatGPT-4o. Clin Rheumatol. 2024 Nov;43(11):3507-3513. doi: 10.1007/s10067-024-07154-5. Epub 2024 Sep 28.
9. Comparing answers of ChatGPT and Google Gemini to common questions on benign anal conditions. Tech Coloproctol. 2025 Jan 26;29(1):57. doi: 10.1007/s10151-024-03096-x.
10. Assessment of readability, reliability, and quality of ChatGPT®, BARD®, Gemini®, Copilot®, Perplexity® responses on palliative care. Medicine (Baltimore). 2024 Aug 16;103(33):e39305. doi: 10.1097/MD.0000000000039305.

References Cited in This Article

1. Unlocking Health Literacy: The Ultimate Guide to Hypertension Education From ChatGPT Versus Google Gemini. Cureus. 2024 May 8;16(5):e59898. doi: 10.7759/cureus.59898. eCollection 2024 May.
2. Comparison of the Audiological Knowledge of Three Chatbots: ChatGPT, Bing Chat, and Bard. Audiol Neurootol. 2024;29(6):457-463. doi: 10.1159/000538983. Epub 2024 May 6.
3. Google Gemini and Bard artificial intelligence chatbot performance in ophthalmology knowledge assessment. Eye (Lond). 2024 Sep;38(13):2530-2535. doi: 10.1038/s41433-024-03067-4. Epub 2024 Apr 13.
4. Chatbot Reliability in Managing Thoracic Surgical Clinical Scenarios. Ann Thorac Surg. 2024 Jul;118(1):275-281. doi: 10.1016/j.athoracsur.2024.03.023. Epub 2024 Apr 2.
5. Examining how information presentation methods and a chatbot impact the use and effectiveness of electronic health record patient portals: An exploratory study. Patient Educ Couns. 2024 Feb;119:108055. doi: 10.1016/j.pec.2023.108055. Epub 2023 Nov 5.
6. Credibility of ChatGPT in the assessment of obesity in type 2 diabetes according to the guidelines. Int J Obes (Lond). 2024 Feb;48(2):271-275. doi: 10.1038/s41366-023-01410-5. Epub 2023 Nov 11.
7. Enhancing Kidney Transplant Care through the Integration of Chatbot. Healthcare (Basel). 2023 Sep 12;11(18):2518. doi: 10.3390/healthcare11182518.
8. Evaluating the Sensitivity, Specificity, and Accuracy of ChatGPT-3.5, ChatGPT-4, Bing AI, and Bard Against Conventional Drug-Drug Interactions Clinical Tools. Drug Healthc Patient Saf. 2023 Sep 20;15:137-147. doi: 10.2147/DHPS.S425858. eCollection 2023.
9. Assessing the accuracy and completeness of artificial intelligence language models in providing information on methotrexate use. Rheumatol Int. 2024 Mar;44(3):509-515. doi: 10.1007/s00296-023-05473-5. Epub 2023 Sep 25.
10. Large Language Models in Hematology Case Solving: A Comparative Study of ChatGPT-3.5, Google Bard, and Microsoft Bing. Cureus. 2023 Aug 21;15(8):e43861. doi: 10.7759/cureus.43861. eCollection 2023 Aug.

Quality of Chatbot Responses to the Most Popular Questions Regarding Erectile Dysfunction.

Author Information

Barlas İrfan Şafak, Tunç Lütfi

Affiliation

Clinic of Urology, Ankara Acibadem Hospital, Ankara, Türkiye.

Publication Information

Urol Res Pract. 2025 Jan 3;50(4):253-260. doi: 10.5152/tud.2025.24098.

DOI: 10.5152/tud.2025.24098
PMID: 39873458
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11883663/
Abstract

OBJECTIVE

Erectile dysfunction (ED) is a common cause of male sexual dysfunction. We aimed to evaluate the quality of ChatGPT and Gemini's responses to the most frequently asked questions about ED.

METHODS

This study was conducted as a cross-sectional, observational study. Google Trends was used to identify the questions about ED most frequently asked on the internet, and the answers given by ChatGPT-3.5 and Gemini were compared. Two board-certified urologists assessed the quality of the responses using the Global Quality Score (GQS).

RESULTS

Fifteen questions about ED were included based on Google Trends data. ChatGPT answered all of the questions systematically, whereas Gemini could not answer two of them. When both researchers rated the responses with the GQS, low-quality responses were more frequent from Gemini than from ChatGPT. Inter-rater agreement was 92% for ChatGPT and 95% for Gemini.

CONCLUSION

Despite the expeditious and comprehensive answers provided by chatbots, we identified inadequacies in their responses related to ED. In their current state, they cannot replace the patient-centered approach of healthcare professionals and require further development.
