Medical Data Mathematical Reasoning Team, Advanced Data Science Project, RIKEN Information R&D and Strategy Headquarters, RIKEN, Yokohama, Japan
Department of Artificial Intelligence Medicine, Graduate School of Medicine, Chiba University, Chiba, Japan.
BMJ Health Care Inform. 2024 Apr 4;31(1):e100924. doi: 10.1136/bmjhci-2023-100924.
The objective of this study was to explore the characteristics of generative artificial intelligence (AI) in addressing questions about sexual health among cancer survivors, a topic that patients often find challenging to discuss. We employed Generative Pre-trained Transformer-3.5 (GPT) as the generative AI platform and used DocsBot for citation retrieval (June 2023). A structured prompt was devised to have the AI generate 100 questions based on epidemiological survey data on sexual difficulties among cancer survivors. These questions were submitted to Bot1 (standard GPT) and Bot2 (sourced from two clinical guidelines). No censorship of sexual expressions or medical terms occurred. Although guideline recommendations were not reflected in the responses, 'consultation' was significantly more prevalent in both bots' responses than pharmacological interventions, with ORs of 47.3 (p<0.001) for Bot1 and 97.2 (p<0.001) for Bot2. Generative AI can serve to provide health information on sensitive topics such as sexual health, despite the potential for policy-restricted content. Responses were biased towards non-pharmacological interventions, probably because the GPT model is designed with a policy restricting replies on medical topics. This shift warrants attention, as it could raise patients' expectations for non-pharmacological interventions.
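The odds ratios above compare how often one intervention category appears versus another across the bots' responses. As a minimal sketch of how such an odds ratio and its 95% confidence interval are derived from a 2×2 table, the snippet below uses the standard log-OR formula; the counts are illustrative placeholders, not the study's actual data.

```python
import math

def odds_ratio_ci(a, b, c, d):
    """Odds ratio and 95% CI from a 2x2 contingency table:

                 mentioned  not mentioned
    category 1       a           b
    category 2       c           d
    """
    or_ = (a * d) / (b * c)
    # Standard error of log(OR) via the Woolf method
    se = math.sqrt(1 / a + 1 / b + 1 / c + 1 / d)
    lo = math.exp(math.log(or_) - 1.96 * se)
    hi = math.exp(math.log(or_) + 1.96 * se)
    return or_, lo, hi

# Hypothetical counts (NOT the study's data): e.g. 90 of 100 responses
# mention consultation vs 16 of 100 mentioning pharmacological options.
or_, lo, hi = odds_ratio_ci(90, 10, 16, 84)
print(f"OR = {or_:.2f}, 95% CI [{lo:.2f}, {hi:.2f}]")
```

A significance test for such a table would typically use Fisher's exact test or a chi-square test; those are omitted here for brevity.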