Pham Tuan
Barts and The London School of Medicine and Dentistry, London, UK.
R Soc Open Sci. 2025 May 14;12(5):241873. doi: 10.1098/rsos.241873. eCollection 2025 May.
Artificial intelligence (AI) is transforming healthcare by enhancing diagnostics, personalizing medicine and improving surgical precision. However, its integration into healthcare systems raises significant ethical and legal challenges. This review explores key ethical principles (autonomy, beneficence, non-maleficence, justice, transparency and accountability), highlighting their relevance in AI-driven decision-making. Legal challenges, including data privacy and security, liability for AI errors, regulatory approval processes, intellectual property and cross-border regulations, are also addressed. As AI systems become increasingly autonomous, questions of responsibility and fairness must be carefully considered, particularly given the potential for biased algorithms to amplify healthcare disparities. This paper underscores the importance of multidisciplinary collaboration between technologists, healthcare providers, legal experts and policymakers to create adaptive, globally harmonized frameworks. Public engagement is emphasized as essential for fostering trust and ensuring ethical AI adoption. With AI technologies advancing rapidly, a flexible regulatory environment that evolves with innovation is critical. Aligning AI innovation with ethical and legal imperatives will lead to a safer, more equitable healthcare system for all.