Chow James C L, Li Kay
Department of Radiation Oncology, University of Toronto, Toronto, ON, Canada.
Princess Margaret Cancer Centre, University Health Network, Toronto, ON, Canada.
JMIR Bioinform Biotechnol. 2024 Nov 6;5:e64406. doi: 10.2196/64406.
The integration of chatbots in oncology underscores the pressing need for human-centered artificial intelligence (AI) that addresses patient and family concerns with empathy and precision. Human-centered AI emphasizes ethical principles, empathy, and user-centric approaches, ensuring that technology aligns with human values and needs. This review critically examines the ethical implications of using large language models (LLMs) such as GPT-3 and GPT-4 (OpenAI) in oncology chatbots, exploring how these models replicate human-like language patterns and how this shapes the design of ethical AI systems. The paper identifies key strategies for ethically developing oncology chatbots, focusing on potential biases arising from large training datasets and neural network architectures. Datasets sourced predominantly from Western medical literature and patient interactions may introduce bias by overrepresenting certain demographic groups. Moreover, LLM training methodologies, including fine-tuning, can exacerbate these biases, producing outputs that disproportionately favor affluent or Western populations while neglecting marginalized communities. Through examples of biased outputs in oncology chatbots, the review highlights the ethical challenges LLMs present and the need for mitigation strategies. The study emphasizes integrating human-centric values into AI to mitigate these biases, ultimately advocating for oncology chatbots that are aligned with ethical principles and capable of serving diverse patient populations equitably.