NorthShore University HealthSystem, Research Institute, Evanston, Illinois, USA.
Center for Computational Health, IBM T. J. Watson Research Center, Yorktown Heights, New York, USA.
J Am Med Inform Assoc. 2022 Mar 15;29(4):585-591. doi: 10.1093/jamia/ocac006.
Recent advances in the science and technology of artificial intelligence (AI) and growing numbers of deployed AI systems in healthcare and other services have called attention to the need for ethical principles and governance. We define and provide a rationale for principles that should guide the commission, creation, implementation, maintenance, and retirement of AI systems as a foundation for governance throughout the lifecycle. Some principles are derived from the familiar requirements of practice and research in medicine and healthcare: beneficence, nonmaleficence, autonomy, and justice come first. A set of principles follow from the creation and engineering of AI systems: explainability of the technology in plain terms; interpretability, that is, plausible reasoning for decisions; fairness and absence of bias; dependability, including "safe failure"; provision of an audit trail for decisions; and active management of the knowledge base to remain up to date and sensitive to any changes in the environment. In organizational terms, the principles require benevolence, aiming to do good through the use of AI; transparency, ensuring that all assumptions and potential conflicts of interest are declared; and accountability, including active oversight of AI systems and management of any risks that may arise. Particular attention is drawn to the case of vulnerable populations, where extreme care must be exercised. Finally, the principles emphasize the need for user education at all levels of engagement with AI and for continuing research into AI and its biomedical and healthcare applications.