Strotherm Janine, Ashraf Inaam, Hammer Barbara
Center for Cognitive Interaction Technology, Universität Bielefeld, Bielefeld, North Rhine-Westphalia, Germany.
PeerJ Comput Sci. 2024 Sep 30;10:e2317. doi: 10.7717/peerj-cs.2317. eCollection 2024.
Especially when artificial intelligence (AI)-supported decisions affect society, the fairness of such AI-based methodologies constitutes an important area of research. In this contribution, we investigate applications of AI to the socioeconomically relevant infrastructure of water distribution systems (WDSs). We propose an appropriate definition of protected groups in WDSs, together with generalized definitions of group fairness that are applicable even to multiple non-binary sensitive features and provably coincide with existing definitions for a single binary sensitive feature. We demonstrate that typical methods for leakage detection in WDSs are unfair in this sense. We therefore propose a general fairness-enhancing framework, formulated both as an extension of the specific leakage detection pipeline and for an arbitrary learning scheme, to increase the fairness of the AI-based algorithm. Finally, we evaluate and compare several specific instantiations of this framework on a toy WDS and on a realistic WDS to demonstrate their utility.
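To illustrate the kind of group-fairness notion the abstract refers to, the following is a minimal sketch (not the authors' exact formulation) of a demographic-parity-style gap generalized to a sensitive feature with multiple, non-binary groups: the largest pairwise difference in positive-prediction rates across groups. For two groups this reduces to the familiar binary demographic-parity difference. The function name and interface are illustrative assumptions.

```python
# Hedged sketch: a generalized demographic-parity gap for a sensitive
# feature with an arbitrary number of groups. NOT the paper's exact
# definition; function name and interface are illustrative.
from itertools import combinations


def demographic_parity_gap(predictions, groups):
    """Largest |P(yhat=1 | g) - P(yhat=1 | g')| over all pairs of groups.

    predictions: iterable of 0/1 model outputs
    groups: iterable of group labels (any hashable), same length
    """
    # Count positives and totals per group.
    counts = {}
    for yhat, g in zip(predictions, groups):
        pos, total = counts.get(g, (0, 0))
        counts[g] = (pos + (yhat == 1), total + 1)
    rate = {g: pos / total for g, (pos, total) in counts.items()}
    if len(rate) < 2:
        return 0.0  # fewer than two groups: no disparity to measure
    return max(abs(rate[a] - rate[b]) for a, b in combinations(rate, 2))


# Usage: three groups with positive rates 0.5, 0.25, and 0.75
preds = [1, 1, 0, 0, 1, 0, 0, 0, 1, 1, 1, 0]
grps = ["A"] * 4 + ["B"] * 4 + ["C"] * 4
gap = demographic_parity_gap(preds, grps)  # max disparity is 0.75 - 0.25
```

With a single binary group label, the maximum runs over exactly one pair, recovering the standard binary demographic-parity difference.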