Anderson Joshua W, Visweswaran Shyam
Intelligent Systems Program, University of Pittsburgh, Pittsburgh, PA 15213, United States.
Biomedical Informatics, University of Pittsburgh, Pittsburgh, PA 15213, United States.
JAMIA Open. 2024 Dec 30;8(1):ooae149. doi: 10.1093/jamiaopen/ooae149. eCollection 2025 Feb.
Statistical and artificial intelligence algorithms are increasingly being developed for use in healthcare. These algorithms may reflect biases that magnify disparities in clinical care, and there is a growing need to understand how algorithmic biases can be mitigated in pursuit of algorithmic fairness. We conducted a scoping review on algorithmic individual fairness (IF) to understand the current state of research on the metrics and methods developed to achieve IF and their applications in healthcare.
We searched four databases, PubMed, ACM Digital Library, IEEE Xplore, and medRxiv, for algorithmic IF metrics, algorithmic bias mitigation, and healthcare applications. Our search was restricted to articles published between January 2013 and November 2024. We identified 2498 articles through database searches, along with seven additional articles; 32 articles were included in the review. Data from the selected articles were extracted, and the findings were synthesized.
Based on the 32 articles in the review, we identified several themes, including philosophical underpinnings of fairness, IF metrics, mitigation methods for achieving IF, implications of achieving IF on group fairness and vice versa, and applications of IF in healthcare.
We find that research on IF is still in its early stages, particularly in healthcare, as evidenced by the limited number of relevant articles published between 2013 and 2024. While healthcare applications of IF remain sparse, the number of publications has grown steadily since 2012. The limitations of group fairness further emphasize the need for alternative approaches like IF. However, IF itself is not without challenges, including subjective definitions of similarity and potential bias encoding from data-driven methods. These findings, coupled with the limitations of the review process, underscore the need for more comprehensive research on the evolution of IF metrics and definitions to advance this promising field.
While significant work has been done on algorithmic IF in recent years, the definition, use, and study of IF remain in their infancy, especially in healthcare. Future research is needed to comprehensively apply and evaluate IF in healthcare.