Department of Statistics, Seoul National University, Seoul, 08826, Republic of Korea.
Department of Statistics, Inha University, Incheon, 22212, Republic of Korea.
Neural Netw. 2022 Oct;154:441-454. doi: 10.1016/j.neunet.2022.07.027. Epub 2022 Jul 30.
As they play a crucial role in social decision making, AI algorithms based on ML models should be not only accurate but also fair. Among the many algorithms for fair AI, learning a prediction model by minimizing the empirical risk (e.g., cross-entropy) subject to a given fairness constraint has received much attention. To avoid computational difficulty, however, the given fairness constraint is replaced by a surrogate fairness constraint, just as the 0-1 loss is replaced by a convex surrogate loss in classification problems. In this paper, we investigate the validity of existing surrogate fairness constraints and propose a new surrogate fairness constraint called SLIDE, which is computationally feasible and asymptotically valid in the sense that the learned model satisfies the fairness constraint asymptotically and achieves a fast convergence rate. Numerical experiments confirm that SLIDE works well on various benchmark datasets.
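To make the setup concrete, the following is a minimal sketch of the generic approach the abstract describes: minimizing the empirical risk (cross-entropy) plus a penalty on a surrogate fairness constraint, here a demographic-parity gap with a sigmoid standing in for the 0-1 indicator. The sigmoid surrogate, the penalty formulation, and the linear model are illustrative assumptions, not the paper's SLIDE construction.

# Hypothetical sketch of fairness-constrained ERM with a surrogate
# fairness constraint; the sigmoid relaxation below is an assumption,
# NOT the SLIDE surrogate proposed in the paper.
import torch

def surrogate_dp_gap(scores, s):
    """Demographic-parity gap with sigmoid(score) replacing 1{score > 0}."""
    accept = torch.sigmoid(scores)  # smooth acceptance probability per example
    return (accept[s == 1].mean() - accept[s == 0].mean()).abs()

def train(X, y, s, lam=1.0, epochs=200, lr=0.1):
    """Minimize cross-entropy + lam * surrogate fairness penalty
    for a linear classifier; X: (n, d) features, y: (n,) float labels in
    {0, 1}, s: (n,) binary sensitive attribute."""
    n, d = X.shape
    w = torch.zeros(d, requires_grad=True)
    b = torch.zeros(1, requires_grad=True)
    opt = torch.optim.Adam([w, b], lr=lr)
    bce = torch.nn.BCEWithLogitsLoss()
    for _ in range(epochs):
        opt.zero_grad()
        scores = X @ w + b
        # empirical risk plus relaxed (surrogate) fairness constraint
        loss = bce(scores, y) + lam * surrogate_dp_gap(scores, s)
        loss.backward()
        opt.step()
    return w.detach(), b.detach()

Increasing lam trades accuracy for a smaller surrogate fairness gap; the paper's point is that the choice of surrogate matters, and SLIDE is designed so that satisfying the surrogate constraint implies, asymptotically, satisfying the original fairness constraint.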