Suppr 超能文献



Is ChatGPT a more academic source than google searches for patient questions about hip arthroscopy? An analysis of the most frequently asked questions.

Author Information

Eravsar Necati Bahadir, Aydin Mahmud, Eryilmaz Atahan, Turemis Cihangir, Surucu Serkan, Jimenez Andrew E

Affiliations

Johns Hopkins University, Department of Orthopaedic Surgery, Baltimore, MD, USA; S.B.U. Haydarpasa Numune Training and Research Hospital, Istanbul 34668, Turkey.

Sisli Memorial Hospital, Istanbul, 34384, Turkey.

Publication Information

J ISAKOS. 2025 Jun;12:100892. doi: 10.1016/j.jisako.2025.100892. Epub 2025 May 3.

DOI:10.1016/j.jisako.2025.100892
PMID:40324563
Abstract

OBJECTIVES

The purpose of this study was to compare the reliability and accuracy of responses provided to patients about hip arthroscopy (HA) by Chat Generative Pre-Trained Transformer (ChatGPT), an artificial intelligence (AI) and large language model (LLM) online program, with those obtained through a contemporary Google Search for frequently asked questions (FAQs) regarding HA.

METHODS

"HA" was entered into Google Search and ChatGPT, and the 15 most common FAQs and the answers were determined. In Google Search, the FAQs were obtained from the "People also ask" section. ChatGPT was queried to provide the 15 most common FAQs and subsequent answers. The Rothwell system groups the questions under 10 subheadings. Responses of ChatGPT and Google Search engines were compared.

RESULTS

Timeline of recovery (23.3%) and technical details (20%) were the most common question categories. ChatGPT produced significantly more content in the technical details category than Google Search (33.3% vs. 6.6%; p = 0.0455). Academic sources were the most common reference type for both Google web search (46.6%) and ChatGPT (93.3%), and ChatGPT provided significantly more academic references than Google web search (93.3% vs. 46.6%). Conversely, Google web search cited medical practice references (20% vs. 0%), single surgeon websites (26% vs. 0%), and government websites (6% vs. 0%) more frequently than ChatGPT.

CONCLUSION

ChatGPT performed similarly to Google searches for information about HA. Compared to Google, ChatGPT provided significantly more academic sources for its answers to patient questions.

LEVEL OF EVIDENCE

Level IV.
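The abstract reports a significant difference in the technical-details category (33.3% vs. 6.6%, p = 0.0455) but does not state which statistical test was used. As an illustrative sketch only — assuming the percentages correspond to 5 of 15 ChatGPT questions versus 1 of 15 Google questions — a two-sided Fisher's exact test on that hypothetical 2×2 table can be computed with the Python standard library:

```python
from math import comb

def fisher_exact_two_sided(a, b, c, d):
    """Two-sided Fisher's exact test for a 2x2 table [[a, b], [c, d]].

    Sums the probabilities of every table with the same margins whose
    hypergeometric probability does not exceed that of the observed table.
    """
    row1, col1, n = a + b, a + c, a + b + c + d

    def prob(k):
        # Hypergeometric probability of k counts in the top-left cell.
        return comb(col1, k) * comb(n - col1, row1 - k) / comb(n, row1)

    p_obs = prob(a)
    lo = max(0, row1 + col1 - n)   # smallest feasible top-left count
    hi = min(row1, col1)           # largest feasible top-left count
    # Small tolerance guards against floating-point ties with p_obs.
    return sum(prob(k) for k in range(lo, hi + 1)
               if prob(k) <= p_obs * (1 + 1e-9))

# Hypothetical counts inferred from the abstract's percentages:
# ChatGPT 5/15 technical-detail questions vs. Google 1/15.
p = fisher_exact_two_sided(5, 10, 1, 14)
print(round(p, 4))  # → 0.1686
```

Note that this sketch yields p ≈ 0.17, not the reported 0.0455, so the paper presumably used a different test or different underlying counts; the code shows only how such a comparison could be run, not the authors' actual analysis.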


Similar Articles

1. Is ChatGPT a more academic source than google searches for patient questions about hip arthroscopy? An analysis of the most frequently asked questions.
J ISAKOS. 2025 Jun;12:100892. doi: 10.1016/j.jisako.2025.100892. Epub 2025 May 3.
2. Do ChatGPT and Google differ in answers to commonly asked patient questions regarding total shoulder and total elbow arthroplasty?
J Shoulder Elbow Surg. 2024 Aug;33(8):e429-e437. doi: 10.1016/j.jse.2023.11.014. Epub 2024 Jan 3.
3. Evaluating the readability, quality, and reliability of responses generated by ChatGPT, Gemini, and Perplexity on the most commonly asked questions about Ankylosing spondylitis.
PLoS One. 2025 Jun 18;20(6):e0326351. doi: 10.1371/journal.pone.0326351. eCollection 2025.
4. Evaluating ChatGPT as a patient resource for frequently asked questions about lung cancer surgery-a pilot study.
J Thorac Cardiovasc Surg. 2025 Apr;169(4):1174-1180.e18. doi: 10.1016/j.jtcvs.2024.09.030. Epub 2024 Sep 24.
5. ChatGPT-4.0 vs. Google: Which Provides More Academic Answers to Patients' Questions on Arthroscopic Meniscus Repair?
Cureus. 2024 Dec 25;16(12):e76380. doi: 10.7759/cureus.76380. eCollection 2024 Dec.
6. ChatGPT-4 Performs Clinical Information Retrieval Tasks Using Consistently More Trustworthy Resources Than Does Google Search for Queries Concerning the Latarjet Procedure.
Arthroscopy. 2025 Mar;41(3):588-597. doi: 10.1016/j.arthro.2024.05.025. Epub 2024 Jun 25.
7. Accuracy and Readability of ChatGPT Responses to Patient-Centric Strabismus Questions.
J Pediatr Ophthalmol Strabismus. 2025 May-Jun;62(3):220-227. doi: 10.3928/01913913-20250110-02. Epub 2025 Feb 19.
8. American Academy of Orthopaedic Surgeons OrthoInfo provides more readable information regarding rotator cuff injury than ChatGPT.
J ISAKOS. 2025 Feb 12;12:100841. doi: 10.1016/j.jisako.2025.100841.
9. Can Patients Rely on ChatGPT to Answer Hand Pathology-Related Medical Questions?
Hand (N Y). 2024 Apr 23:15589447241247246. doi: 10.1177/15589447241247246.
10. Dr. Google vs. Dr. ChatGPT: Exploring the Use of Artificial Intelligence in Ophthalmology by Comparing the Accuracy, Safety, and Readability of Responses to Frequently Asked Patient Questions Regarding Cataracts and Cataract Surgery.
Semin Ophthalmol. 2024 Aug;39(6):472-479. doi: 10.1080/08820538.2024.2326058. Epub 2024 Mar 22.