
Similar Articles

1. The potential of chatbots in chronic venous disease patient management.
JVS Vasc Insights. 2023;1. doi: 10.1016/j.jvsvi.2023.100019. Epub 2023 Jun 19.
2. The performance of artificial intelligence chatbot large language models to address skeletal biology and bone health queries.
J Bone Miner Res. 2024 Mar 22;39(2):106-115. doi: 10.1093/jbmr/zjad007.
3. Understanding How ChatGPT May Become a Clinical Administrative Tool Through an Investigation on the Ability to Answer Common Patient Questions Concerning Ulnar Collateral Ligament Injuries.
Orthop J Sports Med. 2024 Jul 31;12(7):23259671241257516. doi: 10.1177/23259671241257516. eCollection 2024 Jul.
4. The Use of Generative AI for Scientific Literature Searches for Systematic Reviews: ChatGPT and Microsoft Bing AI Performance Evaluation.
JMIR Med Inform. 2024 May 14;12:e51187. doi: 10.2196/51187.
5. Application of artificial intelligence chatbots, including ChatGPT, in education, scholarly work, programming, and content generation and its prospects: a narrative review.
J Educ Eval Health Prof. 2023;20:38. doi: 10.3352/jeehp.2023.20.38. Epub 2023 Dec 27.
6. Appropriateness of Artificial Intelligence Chatbots in Diabetic Foot Ulcer Management.
Int J Low Extrem Wounds. 2024 Feb 28:15347346241236811. doi: 10.1177/15347346241236811.
7. Exploring the Performance of ChatGPT-4 in the Taiwan Audiologist Qualification Examination: Preliminary Observational Study Highlighting the Potential of AI Chatbots in Hearing Care.
JMIR Med Educ. 2024 Apr 26;10:e55595. doi: 10.2196/55595.
8. Assessing ChatGPT as a Medical Consultation Assistant for Chronic Hepatitis B: Cross-Language Study of English and Chinese.
JMIR Med Inform. 2024 Aug 8;12:e56426. doi: 10.2196/56426.
9. Beyond the Hype-The Actual Role and Risks of AI in Today's Medical Practice: Comparative-Approach Study.
JMIR AI. 2024 Jan 22;3:e49082. doi: 10.2196/49082.
10. Performance of Artificial Intelligence Chatbots on Glaucoma Questions Adapted From Patient Brochures.
Cureus. 2024 Mar 23;16(3):e56766. doi: 10.7759/cureus.56766. eCollection 2024 Mar.

Cited By

1. How Well Do Different AI Language Models Inform Patients About Radiofrequency Ablation for Varicose Veins?
Cureus. 2025 Jun 22;17(6):e86537. doi: 10.7759/cureus.86537. eCollection 2025 Jun.
2. Clinical applications of large language models in medicine and surgery: A scoping review.
J Int Med Res. 2025 Jul;53(7):3000605251347556. doi: 10.1177/03000605251347556. Epub 2025 Jul 4.
3. Does Artificial Intelligence Bring New Insights in Diagnosing Phlebological Diseases?-A Systematic Review.
Biomedicines. 2025 Mar 22;13(4):776. doi: 10.3390/biomedicines13040776.
4. Examining Healthcare Practitioners' Perceptions of Virtual Physicians, mHealth Applications, and Barriers to Adoption: Insights for Improving Patient Care and Digital Health Integration.
Int J Gen Med. 2025 Apr 1;18:1865-1885. doi: 10.2147/IJGM.S515448. eCollection 2025.
5. Large Language Models for Chatbot Health Advice Studies: A Systematic Review.
JAMA Netw Open. 2025 Feb 3;8(2):e2457879. doi: 10.1001/jamanetworkopen.2024.57879.
6. Current applications and challenges in large language models for patient care: a systematic review.
Commun Med (Lond). 2025 Jan 21;5(1):26. doi: 10.1038/s43856-024-00717-2.
7. Analyzing evaluation methods for large language models in the medical field: a scoping review.
BMC Med Inform Decis Mak. 2024 Nov 29;24(1):366. doi: 10.1186/s12911-024-02709-7.
8. Large language models in patient education: a scoping review of applications in medicine.
Front Med (Lausanne). 2024 Oct 29;11:1477898. doi: 10.3389/fmed.2024.1477898. eCollection 2024.
9. Large language models for structured reporting in radiology: past, present, and future.
Eur Radiol. 2025 May;35(5):2589-2602. doi: 10.1007/s00330-024-11107-6. Epub 2024 Oct 23.
10. Assessing the quality of ChatGPT's responses to questions related to radiofrequency ablation for varicose veins.
J Vasc Surg Venous Lymphat Disord. 2025 Jan;13(1):101985. doi: 10.1016/j.jvsv.2024.101985. Epub 2024 Sep 25.

The potential of chatbots in chronic venous disease patient management.

Author Information

Athavale Anand, Baier Jonathan, Ross Elsie, Fukaya Eri

Affiliations

Division of Vascular Surgery, Stanford University School of Medicine, Palo Alto.

NextNext LLC, Lovettsville.

Publication Information

JVS Vasc Insights. 2023;1. doi: 10.1016/j.jvsvi.2023.100019. Epub 2023 Jun 19.

DOI: 10.1016/j.jvsvi.2023.100019
PMID: 37701430
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10497234/
Abstract

OBJECTIVE

Health care providers and recipients have been using artificial intelligence and its subfields, such as natural language processing and machine learning technologies, in the form of search engines to obtain medical information for some time now. Although a search engine returns a ranked list of webpages in response to a query and allows the user to obtain information from those links directly, ChatGPT has elevated the interface between humans and artificial intelligence by attempting to provide relevant information in a human-like textual conversation. This technology is being adopted rapidly and has enormous potential to impact various aspects of health care, including patient education, research, scientific writing, pre-visit/post-visit queries, documentation assistance, and more. The objective of this study is to assess whether chatbots could assist with answering patient questions and electronic health record inbox management.

METHODS

We devised two questionnaires: (1) administrative and non-complex medical questions (based on actual inbox questions); and (2) complex medical questions on the topic of chronic venous disease. We graded the performance of publicly available chatbots regarding their potential to assist with electronic health record inbox management. Responses were graded independently by an internist and a vascular medicine specialist.

RESULTS

On administrative and non-complex medical questions, ChatGPT 4.0 performed better than ChatGPT 3.5. ChatGPT 4.0 received a grade of 1 on all the questions: 20 of 20 (100%). ChatGPT 3.5 received a grade of 1 on 14 of 20 questions (70%), grade 2 on 4 of 20 questions (20%), grade 3 on 0 questions (0%), and grade 4 on 2 of 20 questions (10%). On complex medical questions, ChatGPT 4.0 performed the best. ChatGPT 4.0 received a grade of 1 on 15 of 20 questions (75%), grade 2 on 2 of 20 questions (10%), grade 3 on 2 of 20 questions (10%), and grade 4 on 1 of 20 questions (5%). ChatGPT 3.5 received a grade of 1 on 9 of 20 questions (45%), grade 2 on 4 of 20 questions (20%), grade 3 on 4 of 20 questions (20%), and grade 4 on 3 of 20 questions (15%). Clinical Camel received a grade of 1 on 0 of 20 questions (0%), grade 2 on 5 of 20 questions (25%), grade 3 on 5 of 20 questions (25%), and grade 4 on 10 of 20 questions (50%).
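The grade distributions above are straightforward count-to-percentage conversions over 20 questions per questionnaire. As a minimal sketch (the function and variable names below are ours for illustration, not from the study), the reported percentages can be reproduced as:

```python
# Hypothetical sketch: converting per-question grade counts into the
# percentages reported in the results. Grade counts are taken from the
# abstract; names like grade_percentages are illustrative only.
from collections import Counter

def grade_percentages(grades, scale=(1, 2, 3, 4)):
    """Return {grade: percentage of questions} for a list of grades."""
    counts = Counter(grades)
    total = len(grades)
    return {g: 100 * counts.get(g, 0) / total for g in scale}

# ChatGPT 3.5 on complex medical questions: 9x grade 1, 4x grade 2,
# 4x grade 3, 3x grade 4 (20 questions total), per the abstract.
gpt35_complex = [1] * 9 + [2] * 4 + [3] * 4 + [4] * 3
print(grade_percentages(gpt35_complex))
# -> {1: 45.0, 2: 20.0, 3: 20.0, 4: 15.0}
```

The same helper reproduces the other rows, e.g. ChatGPT 4.0's perfect score on the administrative questionnaire is simply `grade_percentages([1] * 20)`.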

CONCLUSIONS

Based on our interactions with ChatGPT regarding the topic of chronic venous disease, it is plausible that in the future, this technology may be used to assist with electronic health record inbox management and offload medical staff. However, for this technology to receive regulatory approval to be used for that purpose, it will require extensive supervised training by subject experts, have guardrails to prevent "hallucinations" and maintain confidentiality, and prove that it can perform at a level comparable to (if not better than) humans. (JVS-Vascular Insights 2023;1:100019.).
