Hou Bojian, Mondragón Andrés, Tarzanagh Davoud Ataee, Zhou Zhuoping, Saykin Andrew J, Moore Jason H, Ritchie Marylyn D, Long Qi, Shen Li
University of Pennsylvania, Philadelphia, PA.
Indiana University, Indianapolis, IN.
AMIA Jt Summits Transl Sci Proc. 2024 May 31;2024:211-220. eCollection 2024.
Fairness is crucial in machine learning to prevent bias based on sensitive attributes in classifier predictions. However, the pursuit of strict fairness often sacrifices accuracy, particularly when significant prevalence disparities exist among groups, making classifiers less practical. For example, Alzheimer's disease (AD) is more prevalent in women than in men, so enforcing strictly equal treatment across sexes can be inequitable for women. Accounting for prevalence ratios among groups is therefore essential for fair decision-making. In this paper, we introduce prior knowledge for fairness, incorporating prevalence ratio information into the fairness constraint within the Empirical Risk Minimization (ERM) framework. We develop the Prior-knowledge-guided Fair ERM (PFERM) framework, which minimizes expected risk within a specified function class while adhering to a prior-knowledge-guided fairness constraint. This approach strikes a flexible balance between accuracy and fairness. Empirical results confirm its effectiveness in preserving fairness without compromising accuracy.
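To make the constrained objective concrete, the following is a minimal sketch of one plausible PFERM-style formulation; the notation (loss \ell, function class \mathcal{F}, binary sensitive attribute S \in \{a, b\}, prior prevalence ratio \rho, and tolerance \varepsilon) is our illustrative choice, and the exact disparity measure may differ from the paper's:

\min_{f \in \mathcal{F}} \; \mathbb{E}\left[\ell(f(X), Y)\right]
\quad \text{subject to} \quad
\left| \frac{\Pr(f(X) = 1 \mid S = a)}{\Pr(f(X) = 1 \mid S = b)} - \rho \right| \le \varepsilon

Here \rho encodes the prior prevalence ratio between groups (e.g., the female-to-male AD prevalence ratio); setting \rho = 1 would recover a strict demographic-parity-style constraint, while \varepsilon tunes the trade-off between accuracy and fairness.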