Booz Allen Hamilton, Washington, DC, USA.
Departments of Biomedical Informatics, Biostatistics, and Medicine, Vanderbilt University Medical Center, Nashville, Tennessee, USA.
J Am Med Inform Assoc. 2021 Jul 14;28(7):1582-1590. doi: 10.1093/jamia/ocab065.
Artificial intelligence (AI) is critical to harnessing value from exponentially growing health and healthcare data. Expectations are high that AI solutions will effectively address current health challenges. However, there have been prior periods of enthusiasm for AI followed by periods of disillusionment, reduced investment, and slowed progress, known as "AI Winters." We are now at risk of another AI Winter in health/healthcare due to increasing publicity of AI solutions that fail to deliver their touted breakthroughs, thereby eroding users' trust in AI. In this article, we first highlight recently published literature on AI risks and mitigation strategies relevant to groups considering designing, implementing, and promoting self-governance. We then describe a process by which a diverse group of stakeholders could develop and define standards for promoting trust, as well as AI risk-mitigating practices, through greater industry self-governance. We also describe how adherence to such standards could be verified, specifically through certification/accreditation. Self-governance could be encouraged by governments to complement existing regulatory schema or legislative efforts to mitigate AI risks. Greater adoption of industry self-governance could fill a critical gap and enable a more comprehensive approach to the governance of AI solutions than current US legislation/regulations provide. In this more comprehensive approach, AI developers, AI users, and government/legislators all have critical roles to play in advancing practices that maintain trust in AI and prevent another AI Winter.