Division of Bioinformatics & Biostatistics, National Center for Toxicological Research, US Food and Drug Administration, Jefferson, Arkansas, USA.
Division of Imaging, Diagnostics, and Software Reliability, Office of Science and Engineering Laboratories, US Food and Drug Administration Center for Devices and Radiological Health, Silver Spring, Maryland, USA.
Clin Pharmacol Ther. 2024 Apr;115(4):687-697. doi: 10.1002/cpt.3117. Epub 2023 Dec 12.
Artificial intelligence (AI) is increasingly being used in decision making across various industries, including the public health arena. Bias in any decision-making process can significantly skew outcomes, and AI systems have been shown to exhibit biases at times. The potential for AI systems to perpetuate and even amplify biases is a growing concern. Bias, as used in this paper, refers to the tendency toward a particular characteristic or behavior; thus, a biased AI system is one that shows biased associations between entities. In this literature review, we examine the current state of research on AI bias, including its sources, as well as methods for measuring, benchmarking, and mitigating it. We also examine the biases and mitigation methods specifically relevant to the healthcare field and offer a perspective on bias measurement and mitigation in regulatory science decision making.