
Similar Articles

1. Moral Engagement and Disengagement in Health Care AI Development.
AJOB Empir Bioeth. 2024 Oct-Dec;15(4):291-300. doi: 10.1080/23294515.2024.2336906. Epub 2024 Apr 8.

2. Not in my AI: Moral engagement and disengagement in health care AI development.
Pac Symp Biocomput. 2023;28:496-506.

3. Developer Perspectives on Potential Harms of Machine Learning Predictive Analytics in Health Care: Qualitative Analysis.
J Med Internet Res. 2023 Nov 16;25:e47609. doi: 10.2196/47609.

4. The moral responsibility of the hospital.
J Med Philos. 1982 Feb;7(1):87-100. doi: 10.1093/jmp/7.1.87.

5. Oxytocin enhances group-based guilt in high moral disengagement individuals through increased moral responsibility.
Psychoneuroendocrinology. 2024 Oct;168:107131. doi: 10.1016/j.psyneuen.2024.107131. Epub 2024 Jul 14.

6. The future of Cochrane Neonatal.
Early Hum Dev. 2020 Nov;150:105191. doi: 10.1016/j.earlhumdev.2020.105191. Epub 2020 Sep 12.

7. Artificial Moral Responsibility: How We Can and Cannot Hold Machines Responsible.
Camb Q Healthc Ethics. 2021 Jul;30(3):435-447. doi: 10.1017/S0963180120000985.

8. Moral Disengagement in Social Work.
Soc Work. 2023 Jun 15;68(3):183-191. doi: 10.1093/sw/swad014.

9. Internalized public moral norms and shared sovereignty.
Am J Bioeth. 2011 Jul;11(7):49-51. doi: 10.1080/15265161.2011.572492.

10. The Norms and Corporatization of Medicine Influence Physician Moral Distress in the United States.
Teach Learn Med. 2023 Jun-Jul;35(3):335-345. doi: 10.1080/10401334.2022.2056740. Epub 2022 Apr 25.

Cited By

1. An ecosystem approach to governing commercial actors in healthcare AI.
Policy Stud. 2025 Apr 28. doi: 10.1080/01442872.2025.2497539.

References

1. AI in the hands of imperfect users.
NPJ Digit Med. 2022 Dec 28;5(1):197. doi: 10.1038/s41746-022-00737-z.

2. Artificial intelligence in medicine: Overcoming or recapitulating structural challenges to improving patient care?
Cell Rep Med. 2022 May 17;3(5):100622. doi: 10.1016/j.xcrm.2022.100622. Epub 2022 Apr 27.

3. Ethical Machine Learning in Healthcare.
Annu Rev Biomed Data Sci. 2021 Jul;4:123-144. doi: 10.1146/annurev-biodatasci-092820-114757. Epub 2021 May 6.

4. A Typology of Existing Machine Learning-Based Predictive Analytic Tools Focused on Reducing Costs and Improving Quality in Health Care: Systematic Search and Content Analysis.
J Med Internet Res. 2021 Jun 22;23(6):e26391. doi: 10.2196/26391.

5. Toward a Psychology of Human Agency: Pathways and Reflections.
Perspect Psychol Sci. 2018 Mar;13(2):130-136. doi: 10.1177/1745691617699280.

6. Implementing Machine Learning in Health Care - Addressing Ethical Challenges.
N Engl J Med. 2018 Mar 15;378(11):981-983. doi: 10.1056/NEJMp1714229.

7. Moral disengagement in the corporate world.
Account Res. 2009 Jan-Mar;16(1):41-74. doi: 10.1080/08989620802689847.

8. Informatics and professional responsibility.
Sci Eng Ethics. 2001 Apr;7(2):221-30. doi: 10.1007/s11948-001-0043-5.

Moral Engagement and Disengagement in Health Care AI Development.

Authors

Ariadne A. Nichol, Meghan Halley, Carole Federico, Mildred K. Cho, Pamela L. Sankar

Affiliations

Center for Biomedical Ethics, Stanford University School of Medicine, Stanford, California, USA.

Department of Medical Ethics & Health Policy, University of Pennsylvania, Philadelphia, Pennsylvania, USA.

Publication

AJOB Empir Bioeth. 2024 Oct-Dec;15(4):291-300. doi: 10.1080/23294515.2024.2336906. Epub 2024 Apr 8.

DOI: 10.1080/23294515.2024.2336906
PMID: 38588388
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11458830/
Abstract

BACKGROUND

Machine learning (ML) is increasingly used in health care and can pose harms to patients, clinicians, health systems, and the public. In response, regulators have proposed an approach that would shift more responsibility to ML developers for mitigating potential harms. To be effective, this approach requires ML developers to recognize, accept, and act on responsibility for mitigating harms. However, little is known about developers' own perspectives on their obligations to mitigate harms.

METHODS

We conducted 40 semi-structured interviews with developers of ML predictive analytics applications for health care in the United States.

RESULTS

Participants varied widely in their perspectives on personal responsibility and offered examples of both moral engagement and disengagement, in a variety of forms. Most participants (70%) made a statement indicative of moral engagement, but most of these statements reflected only an awareness of moral issues; a smaller subset included additional elements of engagement such as recognizing responsibility, alignment with personal values, addressing conflicts of interest, and opportunities for action. Further, we identified eight distinct categories of moral disengagement, reflecting efforts to minimize potential harms or to deflect personal responsibility for preventing or mitigating them.

CONCLUSIONS

These findings suggest possible facilitators and barriers to the development of ethical ML that could act by encouraging moral engagement or discouraging moral disengagement. Regulatory approaches that depend on the ability of ML developers to recognize, accept, and act on responsibility for mitigating harms might have limited success without education and guidance for ML developers about the extent of their responsibilities and how to implement them.
