The role of patient outcomes in shaping moral responsibility in AI-supported decision making.

Author Information

Edwards C, Murphy A, Singh A, Daniel S, Chamunyonga C

Affiliations

Queensland University of Technology, School of Clinical Sciences, Faculty of Health, Brisbane, QLD, Australia; Department of Medical Imaging, Redcliffe Hospital, Redcliffe, QLD, Australia.

Queensland University of Technology, School of Clinical Sciences, Faculty of Health, Brisbane, QLD, Australia; Medical Imaging and Nuclear Medicine, Children's Health Queensland Hospital and Health Service, South Brisbane, QLD, Australia; Department of Medical Imaging, Princess Alexandra Hospital, Woolloongabba, QLD, Australia.

Publication Information

Radiography (Lond). 2025 May;31(3):102948. doi: 10.1016/j.radi.2025.102948. Epub 2025 Apr 13.

Abstract

INTRODUCTION

Integrating decision support mechanisms utilising artificial intelligence (AI) into medical radiation practice introduces unique challenges to accountability for patient care outcomes. AI systems, often seen as "black boxes", can obscure decision-making processes, raising concerns about practitioner responsibility, especially when outcomes are adverse. This study examines how medical radiation practitioners perceive and attribute moral responsibility when interacting with AI-assisted decision-making tools.

METHODS

A cross-sectional online survey of international medical radiation practitioners was conducted from September to December 2024. Participants were randomly assigned one of four profession-specific scenarios involving AI recommendations and patient outcomes. A 5-point Likert scale assessed practitioners' perceptions of moral responsibility, and responses were analysed using descriptive statistics, Kruskal-Wallis tests, and ordinal regression. Demographic and contextual factors were also evaluated.
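For readers unfamiliar with these methods, below is a minimal Python sketch of this style of analysis: a Kruskal-Wallis test comparing Likert ratings between outcome groups, followed by a proportional-odds ordinal regression. The data, column names, and effect sizes are entirely hypothetical, not the study's dataset or code; the sketch assumes scipy and statsmodels are available. Note that the Kruskal-Wallis H statistic is referred to a χ² distribution with k−1 degrees of freedom, which is the χ²(1) form reported in the results.

# A minimal sketch of this style of analysis on simulated data.
# All variables and effect sizes here are hypothetical illustrations,
# not the study's dataset or code.
import numpy as np
import pandas as pd
from scipy.stats import kruskal
from statsmodels.miscmodels.ordinal_model import OrderedModel

rng = np.random.default_rng(0)
n = 200

# Simulated survey: scenario outcome, prior AI-ethics knowledge, and a
# 5-point Likert rating of the practitioner's own moral responsibility.
outcome = rng.choice(["positive", "negative"], size=n)
knowledge = rng.integers(0, 2, size=n)
latent = 0.8 * (outcome == "positive") + 0.4 * knowledge + rng.normal(size=n)
rating = np.digitize(latent, [-1.0, -0.2, 0.6, 1.4]) + 1  # latent score -> 1..5

df = pd.DataFrame({
    "responsibility": rating,
    "outcome": outcome,
    "ai_ethics_knowledge": knowledge,
})

# Kruskal-Wallis test: do responsibility ratings differ by patient outcome?
# The H statistic follows a chi-square distribution with k-1 degrees of freedom.
groups = [g["responsibility"].to_numpy() for _, g in df.groupby("outcome")]
h_stat, p_value = kruskal(*groups)
print(f"H({len(groups) - 1}) = {h_stat:.2f}, p = {p_value:.4f}")

# Proportional-odds (ordinal logit) regression of the Likert rating on
# outcome and prior AI-ethics knowledge. No explicit intercept: the
# estimated category thresholds absorb it.
exog = pd.get_dummies(df[["outcome"]], drop_first=True).astype(float)
exog["ai_ethics_knowledge"] = df["ai_ethics_knowledge"].astype(float)
result = OrderedModel(df["responsibility"], exog, distr="logit").fit(
    method="bfgs", disp=False
)
print(result.summary())

In the study itself, the regression would presumably also include the demographic and contextual covariates mentioned above; only two predictors are kept here for clarity.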

RESULTS

A total of 649 radiographers, radiation therapists, nuclear medicine scientists, and sonographers provided complete responses. Nearly half of participants (49.8%) had experience using AI in their current roles. Practitioners assigned higher moral responsibility to themselves for positive patient outcomes than for negative ones (χ²(1) = 18.98, p < 0.001). Prior knowledge of AI ethics and professional discipline significantly influenced responsibility ratings. While practitioners generally accepted responsibility, 33% also attributed shared responsibility to AI developers and institutions.

CONCLUSION

Patient outcomes significantly influence perceptions of moral responsibility, with a shift toward shared accountability in adverse scenarios. Prior knowledge of AI ethics is crucial in shaping these perceptions, highlighting the need for targeted education.

IMPLICATIONS FOR PRACTICE

Understanding practitioner perceptions of accountability is critical for developing ethical frameworks, training programs, and shared responsibility models that ensure the safe integration of AI into clinical practice. Robust regulatory structures are necessary to address the unique challenges of AI-assisted decision-making.
