Lawson McLean Aaron, Hristidis Vagelis
Department of Neurosurgery, Jena University Hospital - Friedrich Schiller University Jena, Am Klinikum 1, 07747, Jena, Germany.
Comprehensive Cancer Center Central Germany, Jena, Germany.
J Cancer Educ. 2025 Feb 18. doi: 10.1007/s13187-025-02592-4.
The rapid integration of AI-driven chatbots into oncology education represents both a transformative opportunity and a critical challenge. These systems, powered by advanced language models, can deliver personalized, real-time cancer information to patients, caregivers, and clinicians, bridging gaps in access and availability. However, their ability to convincingly mimic human-like conversation raises pressing concerns regarding misinformation, trust, and their overall effectiveness in digital health communication. This review examines the dual-edged role of AI chatbots, exploring their capacity to support patient education and alleviate clinical burdens, while highlighting the risks of algorithmic opacity (i.e., the inability to see the data and reasoning used to make a decision, which hinders appropriate future action), false information, and the ethical dilemmas posed by human-seeming AI entities. Strategies to mitigate these risks include robust oversight, transparent algorithmic development, and alignment with evidence-based oncology protocols. Ultimately, the responsible deployment of AI chatbots requires a commitment to safeguarding the core values of evidence-based practice, patient trust, and human-centered care.