Kristin M. Kostick-Quenet, Sara Gerke
Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, USA.
Penn State Dickinson Law, Carlisle, PA, USA.
NPJ Digit Med. 2022 Dec 28;5(1):197. doi: 10.1038/s41746-022-00737-z.
As the use of artificial intelligence and machine learning (AI/ML) continues to expand in healthcare, much attention has been given to mitigating bias in algorithms to ensure they are employed fairly and transparently. Less attention has been paid to addressing potential bias among AI/ML's human users or to the factors that influence user reliance. We argue for a systematic approach to identifying the existence and impacts of user biases while using AI/ML tools, and we call for the development of embedded interface design features, drawing on insights from decision science and behavioral economics, to nudge users toward more critical and reflective decision making with AI/ML.