Generative artificial intelligence and non-pharmacological bias: an experimental study on cancer patient sexual health communications.

Affiliations

Medical Data Mathematical Reasoning Team, Advanced Data Science Project, RIKEN Information R&D and Strategy Headquarters, RIKEN, Yokohama, Japan.

Department of Artificial Intelligence Medicine, Graduate School of Medicine, Chiba University, Chiba, Japan.

Publication information

BMJ Health Care Inform. 2024 Apr 4;31(1):e100924. doi: 10.1136/bmjhci-2023-100924.

Abstract

The objective of this study was to explore the features of generative artificial intelligence (AI) in asking about sexual health among cancer survivors, a topic that is often challenging for patients to discuss. We employed Generative Pre-trained Transformer-3.5 (GPT) as the generative AI platform and used DocsBot for citation retrieval (June 2023). A structured prompt was devised to have the AI generate 100 questions based on epidemiological survey data on sexual difficulties among cancer survivors. These questions were submitted to Bot1 (standard GPT) and Bot2 (grounded in two clinical guidelines). No censorship of sexual expressions or medical terms occurred. Although guideline recommendations were not reflected in the responses, 'consultation' was significantly more prevalent than pharmacological interventions in both bots' responses, with ORs of 47.3 (p<0.001) in Bot1 and 97.2 (p<0.001) in Bot2. Generative AI can serve to provide health information on sensitive topics such as sexual health, despite the potential for policy-restricted content. Responses were biased towards non-pharmacological interventions, probably because the GPT model was designed with a policy prohibiting replies on medical topics. This shift warrants attention, as it could raise patients' expectations for non-pharmacological interventions.
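The abstract reports odds ratios (ORs) comparing how often 'consultation' appeared in responses versus pharmacological interventions. As a minimal sketch of how such an OR is derived from a 2x2 contingency table, the snippet below uses purely hypothetical counts (not the study's data) chosen so the result lands near the reported 47.3:

```python
# Illustrative sketch: computing an odds ratio (OR) from a 2x2
# contingency table. The counts below are hypothetical examples,
# not data from the study.

def odds_ratio(a, b, c, d):
    """OR for a 2x2 table laid out as:

                 outcome present   outcome absent
    group 1            a                 b
    group 2            c                 d
    """
    return (a / b) / (c / d)

# Hypothetical counts: responses recommending 'consultation' vs.
# pharmacological interventions.
value = odds_ratio(90, 10, 16, 84)  # (90/10) / (16/84) = 47.25
print(round(value, 2))
```

In practice a significance test (e.g. Fisher's exact test, which also yields the OR) would accompany the point estimate, which is presumably how the p<0.001 values were obtained.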

Similar articles

The memory systems of the human brain and generative artificial intelligence.
Heliyon. 2024 May 24;10(11):e31965. doi: 10.1016/j.heliyon.2024.e31965. eCollection 2024 Jun 15.
