University of Notre Dame, Notre Dame, USA.
IQS School of Management, Universitat Ramon Llull, Barcelona, Spain.
Sci Eng Ethics. 2023 Jul 24;29(4):29. doi: 10.1007/s11948-023-00448-y.
This paper clarifies why bias cannot be completely mitigated in Machine Learning (ML) and proposes an end-to-end methodology for translating the ethical principle of justice and fairness into the practice of ML development as an ongoing agreement with stakeholders. The pro-ethical iterative process presented in the paper aims to challenge asymmetric power dynamics in fairness decision-making within ML design and to help ML development teams identify, mitigate, and monitor bias at each step of ML system development. The process also provides guidance on how to explain the inevitably imperfect bias trade-offs to users.