The role of patient outcomes in shaping moral responsibility in AI-supported decision making.

Author Information

Edwards C, Murphy A, Singh A, Daniel S, Chamunyonga C

Affiliations

Queensland University of Technology, School of Clinical Sciences, Faculty of Health, Brisbane, QLD, Australia; Department of Medical Imaging, Redcliffe Hospital, Redcliffe, QLD, Australia.

Queensland University of Technology, School of Clinical Sciences, Faculty of Health, Brisbane, QLD, Australia; Medical Imaging and Nuclear Medicine, Children's Health Queensland Hospital and Health Service, South Brisbane, QLD, Australia; Department of Medical Imaging, Princess Alexandra Hospital, Woolloongabba, QLD, Australia.

Publication Information

Radiography (Lond). 2025 May;31(3):102948. doi: 10.1016/j.radi.2025.102948. Epub 2025 Apr 13.

DOI: 10.1016/j.radi.2025.102948
PMID: 40228324
Abstract

INTRODUCTION

Integrating decision support mechanisms utilising artificial intelligence (AI) into medical radiation practice introduces unique challenges to accountability for patient care outcomes. AI systems, often seen as "black boxes," can obscure decision-making processes, raising concerns about practitioner responsibility, especially in adverse outcomes. This study examines how medical radiation practitioners perceive and attribute moral responsibility when interacting with AI-assisted decision-making tools.

METHODS

A cross-sectional online survey was conducted from September to December 2024, targeting international medical radiation practitioners. Participants were randomly assigned one of four profession-specific scenarios involving AI recommendations and patient outcomes. A 5-point Likert scale assessed the practitioner's perceptions of moral responsibility, and the responses were analysed using descriptive statistics, Kruskal-Wallis tests, and ordinal regression. Demographic and contextual factors were also evaluated.
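The group comparison described above (Likert-scale responsibility ratings compared across groups with a Kruskal-Wallis test) can be sketched as follows. This is an illustrative, standard-library-only implementation with synthetic ratings and hypothetical group names; it is not the authors' analysis code, which would typically use a statistics package.

```python
# Illustrative sketch with synthetic data: a Kruskal-Wallis H test comparing
# 5-point Likert responsibility ratings across two hypothetical professional
# groups, implemented with the Python standard library only (no SciPy).
from collections import Counter

def kruskal_wallis_h(*groups):
    """Return the Kruskal-Wallis H statistic, corrected for ties."""
    pooled = [x for g in groups for x in g]
    n = len(pooled)
    # Mid-rank the pooled sample: tied values share the average rank,
    # which matters for Likert data, where ties are pervasive.
    order = sorted(range(n), key=lambda i: pooled[i])
    ranks = [0.0] * n
    i = 0
    while i < n:
        j = i
        while j + 1 < n and pooled[order[j + 1]] == pooled[order[i]]:
            j += 1
        mid_rank = (i + j) / 2 + 1  # average of the 1-based ranks i+1 .. j+1
        for k in range(i, j + 1):
            ranks[order[k]] = mid_rank
        i = j + 1
    # H = 12/(n(n+1)) * sum_g R_g^2 / n_g - 3(n+1), R_g = rank sum of group g.
    h, idx = 0.0, 0
    for g in groups:
        r_g = sum(ranks[idx:idx + len(g)])
        idx += len(g)
        h += r_g * r_g / len(g)
    h = 12.0 / (n * (n + 1)) * h - 3 * (n + 1)
    # Tie correction: divide by 1 - sum(t^3 - t) / (n^3 - n).
    ties = sum(t**3 - t for t in Counter(pooled).values())
    return h / (1 - ties / (n**3 - n))

# Hypothetical ratings (1 = not responsible at all, 5 = fully responsible).
radiographers = [4, 5, 4, 3, 5, 4]
radiation_therapists = [3, 3, 2, 4, 3, 2]
print(round(kruskal_wallis_h(radiographers, radiation_therapists), 2))  # → 5.44
```

The H statistic is compared against a chi-squared distribution with (number of groups − 1) degrees of freedom, which is why the result below is reported in chi-squared form.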

RESULTS

649 radiographers, radiation therapists, nuclear medicine scientists, and sonographers provided complete responses. Most participants (49.8%) had experience using AI in their current roles. Practitioners assigned higher moral responsibility to themselves in positive patient outcomes compared to negative ones (χ²(1) = 18.98, p < 0.001). Prior knowledge of AI ethics and professional discipline significantly influenced responsibility ratings. While practitioners generally accepted responsibility, 33% also attributed shared responsibility to AI developers and institutions.

CONCLUSION

Patient outcomes significantly influence perceptions of moral responsibility, with a shift toward shared accountability in adverse scenarios. Prior knowledge of AI ethics is crucial in shaping these perceptions, highlighting the need for targeted education.

IMPLICATIONS FOR PRACTICE

Understanding practitioner perceptions of accountability is critical for developing ethical frameworks, training programs, and shared responsibility models that ensure the safe integration of AI into clinical practice. Robust regulatory structures are necessary to address the unique challenges of AI-assisted decision-making.


Similar Articles

1
The role of patient outcomes in shaping moral responsibility in AI-supported decision making.
Radiography (Lond). 2025 May;31(3):102948. doi: 10.1016/j.radi.2025.102948. Epub 2025 Apr 13.
2
Navigating the ethical landscape of artificial intelligence in radiography: a cross-sectional study of radiographers' perspectives.
BMC Med Ethics. 2024 May 11;25(1):52. doi: 10.1186/s12910-024-01052-w.
3
Future Use of AI in Diagnostic Medicine: 2-Wave Cross-Sectional Survey Study.
J Med Internet Res. 2025 Feb 27;27:e53892. doi: 10.2196/53892.
4
Influence of AI behavior on human moral decisions, agency, and responsibility.
Sci Rep. 2025 Apr 10;15(1):12329. doi: 10.1038/s41598-025-95587-6.
5
Artificial Intelligence to support ethical decision-making for incapacitated patients: a survey among German anesthesiologists and internists.
BMC Med Ethics. 2024 Jul 18;25(1):78. doi: 10.1186/s12910-024-01079-z.
6
Expectations of Intensive Care Physicians Regarding an AI-Based Decision Support System for Weaning From Continuous Renal Replacement Therapy: Predevelopment Survey Study.
JMIR Med Inform. 2025 Apr 23;13:e63709. doi: 10.2196/63709.
7
Artificial intelligence in medical education - perception among medical students.
BMC Med Educ. 2024 Jul 27;24(1):804. doi: 10.1186/s12909-024-05760-0.
8
Radiologists' perceptions on AI integration: An in-depth survey study.
Eur J Radiol. 2024 Aug;177:111590. doi: 10.1016/j.ejrad.2024.111590. Epub 2024 Jun 27.
9
Ethical implications of AI-driven clinical decision support systems on healthcare resource allocation: a qualitative study of healthcare professionals' perspectives.
BMC Med Ethics. 2024 Dec 21;25(1):148. doi: 10.1186/s12910-024-01151-8.
10
AI-Assisted Decision-Making in Long-Term Care: Qualitative Study on Prerequisites for Responsible Innovation.
JMIR Nurs. 2024 Jul 25;7:e55962. doi: 10.2196/55962.