
Similar Articles

1. A comparative study of ChatGPT 4o and DeepSeek in addressing CIED infection-related questions: Accuracy and readability assessment.
Medicine (Baltimore). 2026 Jan 30;105(5):e47493. doi: 10.1097/MD.0000000000047493.
2. Evaluating DeepResearch and DeepThink in anterior cruciate ligament surgery patient education: ChatGPT-4o excels in comprehensiveness, DeepSeek R1 leads in clarity and readability of orthopaedic information.
Knee Surg Sports Traumatol Arthrosc. 2025 Jun 1. doi: 10.1002/ksa.12711.
3. Evaluating Artificial Intelligence-Generated Patient Education Materials for Bariatric Surgery: Comparative Analysis of Response Quality, Reliability, and Readability Across ChatGPT and DeepSeek Models.
Obes Surg. 2025 Nov;35(11):4628-4638. doi: 10.1007/s11695-025-08249-x. Epub 2025 Sep 27.
4. Performance of Advanced Artificial Intelligence Models in Pulp Therapy for Immature Permanent Teeth: A Comparison of ChatGPT-4 Omni, DeepSeek, and Gemini Advanced in Accuracy, Completeness, Response Time, and Readability.
J Endod. 2025 Aug 22. doi: 10.1016/j.joen.2025.08.011.
5. AI Chatbots in Answering Questions Related to Ocular Oncology: A Comparative Study Between DeepSeek v3, ChatGPT-4o, and Gemini 2.0.
Cureus. 2025 Aug 22;17(8):e90773. doi: 10.7759/cureus.90773. eCollection 2025 Aug.
6. Evaluating ChatGPT and DeepSeek in postdural puncture headache management: a comparative study with international consensus guidelines.
BMC Neurol. 2025 Jul 1;25(1):264. doi: 10.1186/s12883-025-04280-8.
7. Evaluating Artificial Intelligence in Patient Education: DeepSeek-V3 Versus ChatGPT-4o in Answering Common Questions on Laparoscopic Cholecystectomy.
ANZ J Surg. 2025 Jun 11. doi: 10.1111/ans.70198.
8. Evaluating the Effectiveness of Generative AI for the Creation of Patient Education Materials on Coronary Heart Disease: A Comparative Study.
JMIR Form Res. 2025 Nov 21;9:e78816. doi: 10.2196/78816.
9. Comparative study of technical and patient-related question answering quality of DeepSeek-R1 and ChatGPT-4o in the field of oral and maxillofacial surgery.
Oral Maxillofac Surg. 2025 Sep 29;29(1):163. doi: 10.1007/s10006-025-01464-x.
10. Evaluating Generative AI Large Language Models for Urticaria Management: A Comparative Analysis of DeepSeek-R1 and ChatGPT-4o.
Clin Transl Allergy. 2025 Nov;15(11):e70113. doi: 10.1002/clt2.70113.

References Cited in This Article

1. Harnessing bacterial immunity: CRISPR-Cas system as a versatile tool in combating pathogens and revolutionizing medicine.
Front Cell Infect Microbiol. 2025 May 30;15:1588446. doi: 10.3389/fcimb.2025.1588446. eCollection 2025.
2. DeepSeek: the "Watson" to doctors-from assistance to collaboration.
J Thorac Dis. 2025 Feb 28;17(2):1103-1105. doi: 10.21037/jtd-2025b-03.
3. Comparative evaluation of ChatGPT-4, ChatGPT-3.5 and Google Gemini on PCOS assessment and management based on recommendations from the 2023 guideline.
Endocrine. 2025 Apr;88(1):315-322. doi: 10.1007/s12020-024-04121-7. Epub 2024 Dec 2.
4. Artificial intelligence and clinical guidance in male reproductive health: ChatGPT4.0's AUA/ASRM guideline compliance evaluation.
Andrology. 2025 Feb;13(2):176-183. doi: 10.1111/andr.13693. Epub 2024 Jul 17.
5. Appropriateness of ChatGPT in Answering Heart Failure Related Questions.
Heart Lung Circ. 2024 Sep;33(9):1314-1318. doi: 10.1016/j.hlc.2024.03.005. Epub 2024 May 31.
6. Evaluating ChatGPT-3.5 and ChatGPT-4.0 Responses on Hyperlipidemia for Patient Education.
Cureus. 2024 May 25;16(5):e61067. doi: 10.7759/cureus.61067. eCollection 2024 May.
7. Use of ChatGPT for Determining Clinical and Surgical Treatment of Lumbar Disc Herniation With Radiculopathy: A North American Spine Society Guideline Comparison.
Neurospine. 2024 Mar;21(1):149-158. doi: 10.14245/ns.2347052.526. Epub 2024 Jan 31.
8. Update on Cardiovascular Implantable Electronic Device Infections and Their Prevention, Diagnosis, and Management: A Scientific Statement From the American Heart Association: Endorsed by the International Society for Cardiovascular Infectious Diseases.
Circulation. 2024 Jan 9;149(2):e201-e216. doi: 10.1161/CIR.0000000000001187. Epub 2023 Dec 4.
9. [Prevention, diagnosis and treatment of cardiac implantable electronic device infections. Position paper of the Italian Association of Arrhythmology and Cardiac Pacing (AIAC)].
G Ital Cardiol (Rome). 2023 Jul;24(7):551-570. doi: 10.1714/4060.40435.
10. Cardiovascular Implantable Electronic Devices: Less Is (Often) More.
J Am Coll Cardiol. 2023 Jun 20;81(24):2341-2343. doi: 10.1016/j.jacc.2023.05.004.


A comparative study of ChatGPT 4o and DeepSeek in addressing CIED infection-related questions: Accuracy and readability assessment.

Author Information

Yu Chang, Fan Jianhua, Chen Yu, Shen Weihua, Zhang Yini, Li Ling, Wen Jiasheng, Chen Xiaoli

Affiliations

Department of Infection Management, Kunshan Integrated Traditional Chinese and Western Medicine Hospital, Suzhou, Jiangsu, China.

Department of Cardiology, Kunshan Hospital of Traditional Chinese Medicine, Suzhou, Jiangsu, China.

Publication Information

Medicine (Baltimore). 2026 Jan 30;105(5):e47493. doi: 10.1097/MD.0000000000047493.

DOI: 10.1097/MD.0000000000047493
PMID: 41630320
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12863850/
Abstract

This study aimed to compare the effectiveness of 2 artificial intelligence (AI) models, ChatGPT 4o and DeepSeek, in responding to questions about infections associated with cardiovascular implantable electronic devices (CIED). The focus was on evaluating their accuracy and readability, which are critical for their use in clinical settings. A comparative analysis was conducted using 30 questions based on the American Heart Association's guidelines for CIED-related infections. Each question was posed to both AI models under 2 conditions: once without additional context and once with guideline-based prompts. Accuracy was assessed by 2 cardiovascular experts using a 4-level grading scale. Readability was measured using the Flesch-Kincaid Grade score and word count. Without guideline prompts, ChatGPT 4o provided comprehensive answers for 24 of 30 questions (80.00%), with 5 correct but incomplete answers (16.67%) and 1 partially correct answer (3.33%). DeepSeek also provided comprehensive answers for 24 questions (80.00%) but had 6 correct but incomplete answers (20.00%). With guideline prompts, ChatGPT 4o's comprehensive answer rate increased to 93.33% (28/30), while DeepSeek's rose to 90.00% (27/30). No significant difference in overall accuracy was found (P = .34). In terms of readability, ChatGPT 4o had a higher word count (859.10 ± 235.90) than DeepSeek (526.27 ± 100.45), a statistically significant difference (P < .01). The Flesch-Kincaid Grade score for ChatGPT 4o (15.40 ± 1.18) was higher than DeepSeek's (13.91 ± 1.42), indicating more complex responses (P < .01). With guidelines, both models became less verbose, with ChatGPT 4o's word count dropping to 624.00 ± 249.01 and DeepSeek's to 549.43 ± 117.40; however, this change was not statistically significant (P = .13). Similarly, both models showed slight improvements in readability with guidelines, but these were not statistically significant (P = .11).
Both AI models demonstrated the ability to provide accurate and clinically relevant information for managing CIED infections. The use of guideline-based prompts significantly improved the completeness of their responses. ChatGPT 4o provided more detailed answers, while DeepSeek produced more concise, potentially easier-to-understand outputs.
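The Flesch-Kincaid Grade score used for readability in this study follows a fixed formula: 0.39 × (words per sentence) + 11.8 × (syllables per word) − 15.59. A minimal sketch of that computation is below; the syllable counter is a rough vowel-group heuristic (published readability tools use dictionaries or more refined rules), so treat this as illustrative rather than the authors' exact pipeline.

```python
import re

def count_syllables(word: str) -> int:
    """Rough heuristic: count vowel groups, dropping a silent final 'e'."""
    word = word.lower()
    n = len(re.findall(r"[aeiouy]+", word))
    if word.endswith("e") and not word.endswith(("le", "ee")) and n > 1:
        n -= 1
    return max(n, 1)

def flesch_kincaid_grade(text: str) -> float:
    """FK grade = 0.39*(words/sentences) + 11.8*(syllables/words) - 15.59."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"[A-Za-z']+", text)
    syllables = sum(count_syllables(w) for w in words)
    return 0.39 * len(words) / len(sentences) + 11.8 * syllables / len(words) - 15.59

# Longer sentences and more polysyllabic words raise the grade level,
# which is why longer, denser answers scored 15.40 vs 13.91 here.
grade = flesch_kincaid_grade("Antibiotic therapy alone is often insufficient for device infection.")
```

A grade of 15.40 corresponds to reading material at roughly the university level, well above the 6th-to-8th-grade level usually recommended for patient-facing text.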

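The word-count comparison reported in the abstract (859.10 ± 235.90 vs 526.27 ± 100.45) can be checked from summary statistics alone with Welch's unequal-variance t-test. This sketch assumes n = 30 responses per model (one per question), which the paper does not state explicitly, and uses only the Python standard library.

```python
import math

def welch_t(m1, s1, n1, m2, s2, n2):
    """Welch's t statistic and Welch-Satterthwaite degrees of freedom,
    computed from each group's mean, standard deviation, and size."""
    v1, v2 = s1**2 / n1, s2**2 / n2          # per-group variance of the mean
    t = (m1 - m2) / math.sqrt(v1 + v2)
    df = (v1 + v2)**2 / (v1**2 / (n1 - 1) + v2**2 / (n2 - 1))
    return t, df

# Word counts from the abstract, assuming 30 answers per model
t, df = welch_t(859.10, 235.90, 30, 526.27, 100.45, 30)
print(f"t = {t:.2f}, df = {df:.1f}")  # a large t, consistent with P < .01
```

Under these assumptions t comes out near 7, far beyond any conventional significance threshold, which matches the reported P < .01 for the word-count difference.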