García Abejas Abel, Geraldes Santos David, Leite Costa Fabio, Cordero Botejara Aida, Mota-Filipe Helder, Salvador Vergés Àngels
Faculty of Health Sciences, University of Beira Interior, Lisbon, Portugal.
Palliative care, Hospital Lusíadas Lisboa, Lisbon, Portugal.
Interact J Med Res. 2025 May 14;14:e73517. doi: 10.2196/73517.
Artificial intelligence (AI) is increasingly integrated into palliative medicine, offering opportunities to improve quality, efficiency, and patient-centeredness in end-of-life care. However, its use raises complex ethical issues, including privacy, equity, dehumanization, and decision-making dilemmas.
We aim to critically analyze the main ethical implications of AI in end-of-life palliative care and to examine its benefits and risks. We also propose strategies for ethical and responsible implementation.
We conducted an integrative review of studies published from 2020 to 2025 in English, Portuguese, and Spanish, identified through systematic searches in PubMed, Scopus, and Google Scholar. Inclusion criteria were studies addressing AI in palliative medicine focusing on ethical implications or patient experience. Two reviewers independently performed study selection and data extraction, resolving discrepancies by consensus. The quality of the papers was assessed using the Critical Appraisal Skills Programme checklist and the Hawker et al tool.
Six key themes emerged: (1) practical applications of AI, (2) communication and AI tools, (3) patient experience and humanization, (4) ethical implications, (5) quality of life perspectives, and (6) challenges and limitations. While AI shows promise for improving efficiency and personalization, consolidated real-world examples demonstrating efficiency and equity remain scarce. Key risks include algorithmic bias, cultural insensitivity, and the potential for reduced patient autonomy.
AI can transform palliative care, but implementation must be patient-centered and ethically grounded. Robust policies are needed to ensure equity, privacy, and humanization. Future research should address data diversity, social determinants, and culturally sensitive approaches.