
ChatGPT-4.0 vs. Google: Which Provides More Academic Answers to Patients' Questions on Arthroscopic Meniscus Repair?

Author Information

Eryilmaz Atahan, Aydin Mahmud, Turemis Cihangir, Surucu Serkan

Affiliations

Orthopedic Surgery, Haseki Training and Research Hospital, Istanbul, TUR.

Orthopedic Surgery, Sisli Memorial Hospital, Istanbul, TUR.

Publication Information

Cureus. 2024 Dec 25;16(12):e76380. doi: 10.7759/cureus.76380. eCollection 2024 Dec.

Abstract

Purpose: The purpose of this study was to evaluate the ability of a Chat Generative Pre-trained Transformer (ChatGPT) to provide academic answers to frequently asked questions using a comparison with Google web search FAQs and answers. This study attempted to determine what patients ask on Google and ChatGPT and whether ChatGPT and Google provide factual information for patients about arthroscopic meniscus repair.

Methods: A cleanly installed Google Chrome browser and ChatGPT were used to ensure no individual cookies, browsing history, other side data, or sponsored sites. The term "arthroscopic meniscus repair" was entered into the Google Chrome browser and ChatGPT. The first 15 frequently asked questions (FAQs), answers, and sources of answers to FAQs were identified from both ChatGPT and Google search engines.

Results: Timeline of recovery (20%) and technical details (20%) were the most commonly asked question categories of a total of 30 questions. Technical details and timeline of recovery questions were more commonly asked on ChatGPT compared to Google (technical detail: 33.3% vs. 6.6%, p=0.168; timeline of recovery: 26.6% vs. 13.3%, p=0.651). Answers to questions were more commonly from academic websites in ChatGPT compared to Google (93.3% vs. 20%, p=0.0001). In Google, the most common answer sources were academic (20%) and commercial (20%) websites.

Conclusion: Compared to Google, ChatGPT provided significantly fewer references to commercial content and offered responses that were more aligned with academic sources. ChatGPT may be a valuable adjunct in patient education when used under physician supervision, ensuring information aligns with evidence-based practices.


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/165b/11760333/4bcb851132b7/cureus-0016-00000076380-i01.jpg
