Similar Articles

1. Comparative Evaluation of Information Quality on Colon Cancer for Patients: A Study of ChatGPT-4 and Google. Cureus. 2024 Nov 19;16(11):e73989. doi: 10.7759/cureus.73989. eCollection 2024 Nov.
2. Readability, reliability and quality of responses generated by ChatGPT, Gemini, and Perplexity for the most frequently asked questions about pain. Medicine (Baltimore). 2025 Mar 14;104(11):e41780. doi: 10.1097/MD.0000000000041780.
3. Evaluating the Efficacy of ChatGPT as a Patient Education Tool in Prostate Cancer: Multimetric Assessment. J Med Internet Res. 2024 Aug 14;26:e55939. doi: 10.2196/55939.
4. Assessing the Quality and Reliability of ChatGPT's Responses to Radiotherapy-Related Patient Queries: Comparative Study With GPT-3.5 and GPT-4. JMIR Cancer. 2025 Apr 16;11:e63677. doi: 10.2196/63677.
5. Appropriateness and readability of Google Bard and ChatGPT-3.5 generated responses for surgical treatment of glaucoma. Rom J Ophthalmol. 2024 Jul-Sep;68(3):243-248. doi: 10.22336/rjo.2024.45.
6. Assessing the readability, quality and reliability of responses produced by ChatGPT, Gemini, and Perplexity regarding most frequently asked keywords about low back pain. PeerJ. 2025 Jan 22;13:e18847. doi: 10.7717/peerj.18847. eCollection 2025.
7. Performance of Artificial Intelligence Chatbots in Responding to Patient Queries Related to Traumatic Dental Injuries: A Comparative Study. Dent Traumatol. 2025 Jun;41(3):338-347. doi: 10.1111/edt.13020. Epub 2024 Nov 22.
8. Accuracy and Readability of Artificial Intelligence Chatbot Responses to Vasectomy-Related Questions: Public Beware. Cureus. 2024 Aug 28;16(8):e67996. doi: 10.7759/cureus.67996. eCollection 2024 Aug.
9. Online Patient Education in Obstructive Sleep Apnea: ChatGPT versus Google Search. Healthcare (Basel). 2024 Sep 5;12(17):1781. doi: 10.3390/healthcare12171781.
10. AI Chatbots as Sources of STD Information: A Study on Reliability and Readability. J Med Syst. 2025 Apr 3;49(1):43. doi: 10.1007/s10916-025-02178-z.

Cited By

1. Evaluating the Reliability and Quality of Sarcoidosis-Related Information Provided by AI Chatbots. Healthcare (Basel). 2025 Jun 5;13(11):1344. doi: 10.3390/healthcare13111344.
2. Enhancing Patient Comprehension of Glomerular Disease Treatments Using ChatGPT. Healthcare (Basel). 2024 Dec 31;13(1):57. doi: 10.3390/healthcare13010057.

References

1. The Role of AI in Hospitals and Clinics: Transforming Healthcare in the 21st Century. Bioengineering (Basel). 2024 Mar 29;11(4):337. doi: 10.3390/bioengineering11040337.
2. Dr. Google to Dr. ChatGPT: assessing the content and quality of artificial intelligence-generated medical information on appendicitis. Surg Endosc. 2024 May;38(5):2887-2893. doi: 10.1007/s00464-024-10739-5. Epub 2024 Mar 5.
3. Revolutionizing healthcare: the role of artificial intelligence in clinical practice. BMC Med Educ. 2023 Sep 22;23(1):689. doi: 10.1186/s12909-023-04698-z.
4. Curriculum frameworks and educational programs in artificial intelligence for medical students, residents, and practicing physicians: a scoping review protocol. JBI Evid Synth. 2023 Jul 1;21(7):1477-1484. doi: 10.11124/JBIES-22-00374.
5. ChatGPT in medicine: an overview of its applications, advantages, limitations, future prospects, and ethical considerations. Front Artif Intell. 2023 May 4;6:1169595. doi: 10.3389/frai.2023.1169595. eCollection 2023.
6. Evaluation of quality, readability, suitability, and usefulness of online resources available to cancer survivors. J Cancer Surviv. 2023 Apr;17(2):544-555. doi: 10.1007/s11764-022-01318-5. Epub 2023 Jan 10.
7. Using the Google™ Search Engine for Health Information: Is There a Problem? Case Study: Supplements for Cancer. Curr Dev Nutr. 2021 Feb 3;5(2):nzab002. doi: 10.1093/cdn/nzab002. eCollection 2021 Feb.
8. Quality and reliability evaluation of current Internet information regarding mesh use in inguinal hernia surgery using HONcode and the DISCERN instrument. Hernia. 2021 Oct;25(5):1325-1330. doi: 10.1007/s10029-021-02406-8. Epub 2021 Apr 14.
9. Evaluating health information technologies: A systematic review of framework recommendations. Int J Med Inform. 2020 Oct;142:104247. doi: 10.1016/j.ijmedinf.2020.104247. Epub 2020 Aug 14.
10. Clinical features and outcome of sporadic colorectal carcinoma in young patients: a cross-sectional analysis from a developing country. ISRN Oncol. 2014 Apr 1;2014:461570. doi: 10.1155/2014/461570. eCollection 2014.

Comparative Evaluation of Information Quality on Colon Cancer for Patients: A Study of ChatGPT-4 and Google

Authors

Kepez Murtaza Salih, Ugur Furkan

Affiliations

Department of General Surgery, Hitit University Faculty of Medicine, Çorum, TUR.

Publication

Cureus. 2024 Nov 19;16(11):e73989. doi: 10.7759/cureus.73989. eCollection 2024 Nov.

DOI: 10.7759/cureus.73989
PMID: 39703246
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11656641/
Abstract

Introduction: This study aimed to evaluate and compare the quality and reliability of information provided by two widely used digital platforms, ChatGPT-4 and Google, on frequently asked questions about colon cancer. With the growing popularity of these platforms, individuals increasingly turn to them for accessible health information, yet questions remain about the accuracy and reliability of such content. Given that colon cancer is a prevalent and serious condition, trustworthy information is essential to support patient education, facilitate informed decision-making, and potentially improve patient outcomes. The objective was therefore to determine which platform offers more reliable and accurate medical information on colon cancer, using established evaluation criteria to assess information quality.

Methods: Twenty frequently asked questions about colon cancer were selected based on search popularity and relevance to patients, then searched using ChatGPT-4 and Google. Responses were evaluated with DISCERN (reliability), the Global Quality Score (GQS), Journal of the American Medical Association (JAMA) criteria (accuracy), SAM (suitability), the Flesch-Kincaid Readability Test, HITS (user experience), and VPI (visibility). Statistical analyses tested for significant differences between the platforms (p < 0.05).

Results: ChatGPT-4 scored significantly higher than Google on the DISCERN, GQS, and JAMA criteria, demonstrating superior reliability, accuracy, and comprehensibility (p < 0.001). While both platforms had comparable readability scores on the Flesch-Kincaid Readability Test, ChatGPT-4 was rated as more suitable for patient education according to the SAM criteria (p < 0.01), and as more user-friendly with more structured information on the HITS scale (p < 0.01). Although Google showed higher visibility according to the VPI, the limited presence of HONcode-certified results raised concerns about the reliability of its information.

Conclusion: ChatGPT-4 proved to be a more reliable and higher-quality source of medical information than Google, particularly for patient queries about colon cancer. AI-based platforms such as ChatGPT-4 hold promise for enhancing patient education and providing accurate medical information, although further research is needed to confirm these findings across different medical topics and larger populations.
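The Flesch-Kincaid grade level used in the readability comparison is a fixed formula over average sentence length and average syllables per word. A minimal sketch of that calculation (not the study's own code; the vowel-group syllable counter is a naive heuristic, so scores will differ slightly from dictionary-based tools):

```python
import re

def count_syllables(word: str) -> int:
    """Naive syllable estimate: number of contiguous vowel groups (min 1)."""
    return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

def flesch_kincaid_grade(text: str) -> float:
    """Flesch-Kincaid grade level:
    0.39 * (words/sentences) + 11.8 * (syllables/word) - 15.59
    """
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return (0.39 * (len(words) / len(sentences))
            + 11.8 * (syllables / len(words)) - 15.59)
```

Lower grades indicate easier text; patient-education guidance commonly targets roughly a sixth-to-eighth grade level, which is why both platforms' responses were scored on this scale.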
