Zhang Guixian, Yuan Guan, Cheng Debo, Liu Lin, Li Jiuyong, Zhang Shichao
School of Computer Science and Technology, China University of Mining and Technology, Xuzhou, Jiangsu, 221116, China; Mine Digitization Engineering Research Center of the Ministry of Education, China University of Mining and Technology, Xuzhou, Jiangsu, 221116, China; Artificial Intelligence Research Institute, China University of Mining and Technology, Xuzhou, Jiangsu, 221116, China.
Neural Netw. 2025 Jan;181:106781. doi: 10.1016/j.neunet.2024.106781. Epub 2024 Oct 5.
Graph Neural Networks (GNNs) play a key role in efficiently learning node representations of graph-structured data through message passing, but their predictions are often correlated with sensitive attributes, leading to potential discrimination against certain groups. Given the increasingly widespread application of GNNs, solutions are urgently needed to prevent algorithmic discrimination in GNNs, protect the rights of vulnerable groups, and build trustworthy artificial intelligence. To learn fair node representations of graphs, we propose a novel framework, the Fair Disentangled Graph Neural Network (FDGNN). Within the FDGNN framework, we enhance data diversity through data augmentation, generating instances that have identical sensitive attribute values but different adjacency matrices. Additionally, we design a counterfactual augmentation strategy that constructs instances with varying sensitive attribute values while preserving the same adjacency matrices, thereby balancing the distribution of sensitive values across different groups. Subsequently, we employ a disentangled contrastive learning strategy to acquire disentangled representations of non-sensitive attributes, so that sensitive information does not affect node prediction. Finally, the learned fair representations of non-sensitive attributes are used to build a fair predictive model. Extensive experiments on three real-world datasets demonstrate that FDGNN achieves the best fairness performance compared with baseline methods. The results also demonstrate the potential of disentanglement for learning fair representations.
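To make the two augmentation strategies concrete, below is a minimal, hypothetical PyTorch sketch: one view perturbs the adjacency matrix while keeping sensitive values fixed, and the counterfactual view flips a binary sensitive attribute while keeping the adjacency matrix fixed. The function names, the edge-drop rate, and the assumption of a binary sensitive attribute in a fixed feature column are illustrative, not the authors' implementation.

import torch

def structural_augment(adj: torch.Tensor, drop_rate: float = 0.1) -> torch.Tensor:
    """Same nodes and sensitive values, but a randomly edge-dropped structure."""
    upper = torch.triu(adj, diagonal=1)                  # unique undirected edges
    mask = (torch.rand_like(upper) > drop_rate).float()  # keep each edge w.p. 1 - drop_rate
    kept = upper * mask
    return kept + kept.T                                 # re-symmetrize

def counterfactual_augment(x: torch.Tensor, sens_idx: int) -> torch.Tensor:
    """Flip the binary sensitive attribute, leaving the adjacency untouched."""
    x_cf = x.clone()
    x_cf[:, sens_idx] = 1.0 - x_cf[:, sens_idx]          # flip 0 <-> 1
    return x_cf

# Toy usage: 4 nodes, 3 features, sensitive attribute in column 0.
adj = torch.tensor([[0., 1., 0., 1.],
                    [1., 0., 1., 0.],
                    [0., 1., 0., 1.],
                    [1., 0., 1., 0.]])
x = torch.rand(4, 3)
x[:, 0] = torch.tensor([0., 1., 0., 1.])

adj_view = structural_augment(adj)    # identical sensitive values, different adjacency
x_cf = counterfactual_augment(x, 0)   # identical adjacency, flipped sensitive values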
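The disentangled contrastive step can be sketched in the same spirit: assuming the encoder output is split into a non-sensitive block and a sensitive block, an InfoNCE-style loss pulls each node's non-sensitive embedding toward that of its counterfactual twin, making it invariant to the sensitive attribute. The block split and the InfoNCE form are assumptions for illustration, not the paper's exact objective.

import torch
import torch.nn.functional as F

def disentangled_contrastive_loss(z: torch.Tensor, z_cf: torch.Tensor,
                                  dim_ns: int, tau: float = 0.5) -> torch.Tensor:
    """z, z_cf: (N, d) embeddings of the original and counterfactual views.
    The first dim_ns dimensions are treated as the non-sensitive factor (assumption)."""
    h, h_cf = z[:, :dim_ns], z_cf[:, :dim_ns]  # non-sensitive parts only
    h = F.normalize(h, dim=1)
    h_cf = F.normalize(h_cf, dim=1)
    logits = h @ h_cf.T / tau                  # pairwise cosine similarities
    targets = torch.arange(h.size(0))          # node i's positive is its own counterfactual
    # InfoNCE: all other nodes' counterfactuals serve as negatives, so the
    # non-sensitive factor is pushed to ignore the flipped sensitive attribute.
    return F.cross_entropy(logits, targets)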