Alanazi Abdullah
Health Informatics Department, King Saud Ibn Abdulaziz University for Health Sciences, Riyadh 11481, Saudi Arabia.
King Abdullah International Medical Research Center, Riyadh 14611, Saudi Arabia.
Healthcare (Basel). 2025 Jun 21;13(13):1487. doi: 10.3390/healthcare13131487.
The rapid integration of artificial intelligence (AI) technologies into healthcare systems presents new opportunities and challenges, particularly regarding legal and ethical implications. In Saudi Arabia, a lack of legal awareness could hinder the safe implementation of AI tools. A sequential explanatory mixed-methods design was employed. In Phase One, a structured electronic survey was administered to 357 clinicians across public and private healthcare institutions in Saudi Arabia, assessing legal awareness, liability concerns, data privacy, and trust in AI. In Phase Two, a qualitative expert panel involving health law specialists, digital health advisors, and clinicians was conducted to interpret the survey findings and identify key regulatory needs. Only 7% of clinicians reported high familiarity with the legal implications of AI, and 89% had no formal legal training. Confidence in AI compliance with data laws was low (mean score: 1.40/3). Statistically significant associations were found between professional role and legal familiarity (χ² = 18.6, p < 0.01), and between legal training and confidence in AI compliance (t ≈ 6.1, p < 0.001). Qualitative findings highlighted six core legal barriers, including lack of training, unclear liability, and gaps in regulatory alignment with national laws such as the Personal Data Protection Law (PDPL). The study highlights a major gap in legal readiness among Saudi clinicians, which affects patient safety, liability, and trust in AI. Although clinicians are open to using AI, unclear regulations pose barriers to safe adoption. Experts call for national legal standards, mandatory training, and informed consent protocols. A clear legal framework and clinician education are crucial for the ethical and effective use of AI in healthcare.