Similar Articles

1. Assessment of Artificial Intelligence Chatbot Responses to Top Searched Queries About Cancer.
JAMA Oncol. 2023 Oct 1;9(10):1437-1440. doi: 10.1001/jamaoncol.2023.2947.
2. How Well Do Artificial Intelligence Chatbots Respond to the Top Search Queries About Urological Malignancies?
Eur Urol. 2024 Jan;85(1):13-16. doi: 10.1016/j.eururo.2023.07.004. Epub 2023 Aug 10.
3. Quality of Information About Kidney Stones from Artificial Intelligence Chatbots.
J Endourol. 2024 Oct;38(10):1056-1061. doi: 10.1089/end.2023.0484. Epub 2024 Jul 29.
4. Evaluating the Efficacy of ChatGPT as a Patient Education Tool in Prostate Cancer: Multimetric Assessment.
J Med Internet Res. 2024 Aug 14;26:e55939. doi: 10.2196/55939.
5. Assessing the quality and readability of patient education materials on chemotherapy cardiotoxicity from artificial intelligence chatbots: An observational cross-sectional study.
Medicine (Baltimore). 2025 Apr 11;104(15):e42135. doi: 10.1097/MD.0000000000042135.
6. Quality of Information on Wilms Tumor From Artificial Intelligence Chatbots: What Are Your Patients and Their Families Reading?
Urology. 2025 Apr;198:130-134. doi: 10.1016/j.urology.2025.01.054. Epub 2025 Feb 4.
7. Accuracy of Prospective Assessments of 4 Large Language Model Chatbot Responses to Patient Questions About Emergency Care: Experimental Comparative Study.
J Med Internet Res. 2024 Nov 4;26:e60291. doi: 10.2196/60291.
8. Assessing chatbots ability to produce leaflets on cataract surgery: Bing AI, chatGPT 3.5, chatGPT 4o, ChatSonic, Google Bard, Perplexity, and Pi.
J Cataract Refract Surg. 2025 May 1;51(5):371-375. doi: 10.1097/j.jcrs.0000000000001622.
9. Accuracy and Readability of Artificial Intelligence Chatbot Responses to Vasectomy-Related Questions: Public Beware.
Cureus. 2024 Aug 28;16(8):e67996. doi: 10.7759/cureus.67996. eCollection 2024 Aug.
10. Performance of Artificial Intelligence Chatbots in Responding to Patient Queries Related to Traumatic Dental Injuries: A Comparative Study.
Dent Traumatol. 2025 Jun;41(3):338-347. doi: 10.1111/edt.13020. Epub 2024 Nov 22.

Cited By

1. Utilisation of AI-driven chatbots for perioperative health information seeking: a descriptive qualitative study of orthopaedic patients and family members.
BMJ Open. 2025 Sep 4;15(9):e099824. doi: 10.1136/bmjopen-2025-099824.
2. Evaluation of deepseek, gemini, ChatGPT-4o, and perplexity in responding to salivary gland cancer.
BMC Oral Health. 2025 Aug 23;25(1):1358. doi: 10.1186/s12903-025-06726-4.
3. Enhancing the Readability of Online Pediatric Cataract Education Materials: A Comparative Study of Large Language Models.
Transl Vis Sci Technol. 2025 Aug 1;14(8):19. doi: 10.1167/tvst.14.8.19.
4. Evaluating the Quality of Cardiovascular Disease Information From AI Chatbots: A Comparative Study.
Cureus. 2025 Jul 16;17(7):e88085. doi: 10.7759/cureus.88085. eCollection 2025 Jul.
5. Artificial intelligence across the cancer care continuum.
Cancer. 2025 Aug 15;131(16):e70050. doi: 10.1002/cncr.70050.
6. Assessing the Accuracy and Readability of Large Language Model Guidance for Patients on Breast Cancer Surgery Preparation and Recovery.
J Clin Med. 2025 Aug 1;14(15):5411. doi: 10.3390/jcm14155411.
7. Assessing the Role of Large Language Models Between ChatGPT and DeepSeek in Asthma Education for Bilingual Individuals: Comparative Study.
JMIR Med Inform. 2025 Aug 13;13:e65365. doi: 10.2196/65365.
8. Assessing ChatGPT's Educational Potential in Lung Cancer Radiotherapy From Clinician and Patient Perspectives: Content Quality and Readability Analysis.
JMIR Cancer. 2025 Aug 13;11:e69783. doi: 10.2196/69783.
9. Potential of AI Chatbots in Online Hair Transplantation Consultations: A Multi-metric Assessment of Three Models.
Aesthetic Plast Surg. 2025 Aug 8. doi: 10.1007/s00266-025-05103-4.
10. Empowering breast cancer clients through AI chatbots: transforming knowledge and attitudes for enhanced nursing care.
BMC Nurs. 2025 Jul 29;24(1):994. doi: 10.1186/s12912-025-03585-w.

Assessment of Artificial Intelligence Chatbot Responses to Top Searched Queries About Cancer.

Affiliations

Department of Urology, State University of New York Downstate Health Sciences University, New York.

Department of Urology, New York University School of Medicine, New York.

Publication Information

JAMA Oncol. 2023 Oct 1;9(10):1437-1440. doi: 10.1001/jamaoncol.2023.2947.

DOI: 10.1001/jamaoncol.2023.2947
PMID: 37615960
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10450581/
Abstract

IMPORTANCE

Consumers are increasingly using artificial intelligence (AI) chatbots as a source of information. However, the quality of the cancer information generated by these chatbots has not yet been evaluated using validated instruments.

OBJECTIVE

To characterize the quality of information and presence of misinformation about skin, lung, breast, colorectal, and prostate cancers generated by 4 AI chatbots.

DESIGN, SETTING, AND PARTICIPANTS

This cross-sectional study assessed AI chatbots' text responses to the 5 most commonly searched queries related to the 5 most common cancers using validated instruments. Search data were extracted from the publicly available Google Trends platform and identical prompts were used to generate responses from 4 AI chatbots: ChatGPT version 3.5 (OpenAI), Perplexity (Perplexity.AI), Chatsonic (Writesonic), and Bing AI (Microsoft).
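
For orientation, the factorial layout described above can be sketched in a few lines of Python. The query strings are placeholders (the exact Google Trends queries are not reproduced here); only the counts, 5 cancers by 5 queries by 4 chatbots, are taken from the abstract.

    from itertools import product

    # Factors taken from the abstract; the individual query texts are hypothetical
    # placeholders, since the exact Google Trends queries are not listed here.
    cancers = ["skin", "lung", "breast", "colorectal", "prostate"]
    query_ranks = range(1, 6)   # top 5 searched queries per cancer
    chatbots = ["ChatGPT 3.5", "Perplexity", "Chatsonic", "Bing AI"]

    # Every (cancer, query rank, chatbot) combination yields one text response to score.
    cells = list(product(cancers, query_ranks, chatbots))
    print(len(cells))  # 100, matching the number of responses analyzed in the Results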

EXPOSURES

Google Trends' top 5 search queries related to skin, lung, breast, colorectal, and prostate cancer from January 1, 2021, to January 1, 2023, were input into 4 AI chatbots.

MAIN OUTCOMES AND MEASURES

The primary outcomes were the quality of consumer health information based on the validated DISCERN instrument (scores from 1 [low] to 5 [high] for quality of information) and the understandability and actionability of this information based on the understandability and actionability domains of the Patient Education Materials Assessment Tool (PEMAT) (scores of 0%-100%, with higher scores indicating a higher level of understandability and actionability). Secondary outcomes included misinformation scored using a 5-item Likert scale (scores from 1 [no misinformation] to 5 [high misinformation]) and readability assessed using the Flesch-Kincaid Grade Level readability score.
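
As a rough illustration of the readability metric named above, the Flesch-Kincaid Grade Level is computed as 0.39 * (words per sentence) + 11.8 * (syllables per word) - 15.59. A minimal Python sketch follows; the syllable counter is a naive vowel-group heuristic rather than the dictionary-based counting a production readability tool would use.

    import re

    def count_syllables(word):
        # Naive approximation: count runs of consecutive vowels in the word.
        return max(1, len(re.findall(r"[aeiouy]+", word.lower())))

    def flesch_kincaid_grade(text):
        sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
        words = re.findall(r"[A-Za-z']+", text)
        syllables = sum(count_syllables(w) for w in words)
        # Standard Flesch-Kincaid Grade Level formula.
        return 0.39 * (len(words) / len(sentences)) + 11.8 * (syllables / len(words)) - 15.59

    # Scores of roughly 13 and above correspond to college-level text,
    # the reading level reported for the chatbot responses in this study.
    sample = ("The quality of the cancer information generated by these "
              "chatbots has not yet been evaluated.")
    print(round(flesch_kincaid_grade(sample), 1))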

RESULTS

The analysis included 100 responses from 4 chatbots about the 5 most common search queries for skin, lung, breast, colorectal, and prostate cancer. The quality of text responses generated by the 4 AI chatbots was good (median [range] DISCERN score, 5 [2-5]) and no misinformation was identified. Understandability was moderate (median [range] PEMAT Understandability score, 66.7% [33.3%-90.1%]), and actionability was poor (median [range] PEMAT Actionability score, 20.0% [0%-40.0%]). The responses were written at the college level based on the Flesch-Kincaid Grade Level score.
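
The "median [range]" summaries above are straightforward to reproduce once each response has been scored; a minimal sketch of that aggregation, using made-up example scores rather than the study data, is shown below.

    from statistics import median

    # Hypothetical per-response DISCERN scores (1 = low quality, 5 = high quality);
    # the actual study scored 100 responses across 4 chatbots and 5 cancers.
    discern_scores = [5, 4, 5, 2, 5, 3, 5, 4, 5, 5]

    def median_range(scores):
        # Same "median [range]" summary style used in the Results section.
        return median(scores), min(scores), max(scores)

    mid, low, high = median_range(discern_scores)
    print(f"DISCERN: median {mid} [range {low}-{high}]")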

CONCLUSIONS AND RELEVANCE

Findings of this cross-sectional study suggest that AI chatbots generally produce accurate information for the top cancer-related search queries, but the responses are not readily actionable and are written at a college reading level. These limitations suggest that AI chatbots should be used supplementarily and not as a primary source for medical information.
