Aydin Serhat, Karabacak Mert, Vlachos Victoria, Margetis Konstantinos
School of Medicine, Koç University, Istanbul, Türkiye.
Department of Neurosurgery, Mount Sinai Health System, New York, NY, United States.
Front Med (Lausanne). 2025 Jan 23;12:1527864. doi: 10.3389/fmed.2025.1527864. eCollection 2025.
Large Language Models (LLMs) are transforming patient education in medication management by providing accessible information to support healthcare decision-making. Building on our recent scoping review of LLMs in patient education, this perspective examines their specific role in medication guidance. These artificial intelligence (AI)-driven tools can generate comprehensive responses about drug interactions, side effects, and emergency care protocols, potentially enhancing patient autonomy in medication decisions. However, significant challenges exist, including the risk of misinformation and the complexity of providing accurate drug information without access to individual patient data. Safety concerns are particularly acute when patients rely solely on AI-generated advice for self-medication decisions. This perspective analyzes current capabilities, examines critical limitations, and raises questions regarding the possible integration of LLMs in medication guidance. We emphasize the need for regulatory oversight to ensure these tools serve as supplements to, rather than replacements for, professional healthcare guidance.