
Unbiasing Fairness Evaluation of Radiology AI Model.

Author Information

Liang Yuxuan, Chao Hanqing, Zhang Jiajin, Wang Ge, Yan Pingkun

Affiliation

Biomedical Imaging Center, Center for Biotechnology and Interdisciplinary Studies, Department of Biomedical Engineering, Rensselaer Polytechnic Institute, 110 8th St, Troy, NY 12180, United States.

Publication Information

Meta Radiol. 2024 Sep;2(3). doi: 10.1016/j.metrad.2024.100084. Epub 2024 Jun 13.

Abstract

The fairness of artificial intelligence and machine learning models has long been a concern, with unfairness often traced to imbalanced datasets. While many efforts aim to minimize model bias, this study suggests that traditional fairness evaluation methods may themselves be biased, highlighting the need for a proper evaluation scheme with multiple evaluation metrics, since different criteria can yield different conclusions. Moreover, the limited data size of minority groups introduces significant data uncertainty, which can undermine judgments of fairness. This paper introduces an evaluation approach that estimates the data uncertainty of minority groups through bootstrapping from majority groups, enabling a more objective statistical assessment. Extensive experiments reveal that traditional evaluation methods may have drawn inaccurate conclusions about model fairness. The proposed method delivers an unbiased fairness assessment by addressing the inherent complications of model evaluation on imbalanced datasets. The results show that such comprehensive evaluation can provide more confidence when adopting these models.
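The abstract does not detail the bootstrapping procedure, so the following is only a minimal sketch of one plausible reading of the idea: repeatedly draw majority-group subsamples of the minority group's size, record a performance metric on each, and use the resulting distribution as a reference for how much a group-level score can vary purely from small-sample effects. All data, function names, and parameters here are hypothetical illustrations, not the paper's implementation.

```python
import numpy as np

def bootstrap_metric_distribution(y_true, y_pred, subset_size,
                                  metric, n_boot=1000, seed=0):
    """Repeatedly subsample the majority group at the minority group's size
    and evaluate the metric on each draw, estimating the spread a score can
    show from limited data alone."""
    rng = np.random.default_rng(seed)
    n = len(y_true)
    stats = np.empty(n_boot)
    for b in range(n_boot):
        idx = rng.choice(n, size=subset_size, replace=True)
        stats[b] = metric(y_true[idx], y_pred[idx])
    return stats

def accuracy(y_true, y_pred):
    return float(np.mean(y_true == y_pred))

# Hypothetical setup: a majority group of 10,000 cases scored by a model
# with ~90% accuracy, and a minority group of only 200 cases.
rng = np.random.default_rng(42)
maj_true = rng.integers(0, 2, 10_000)
maj_pred = np.where(rng.random(10_000) < 0.9, maj_true, 1 - maj_true)

dist = bootstrap_metric_distribution(maj_true, maj_pred,
                                     subset_size=200, metric=accuracy)
lo, hi = np.quantile(dist, [0.025, 0.975])
# A minority-group accuracy falling inside [lo, hi] could be explained by
# data uncertainty at n=200 rather than by genuine model bias.
```

Under this reading, an observed metric gap between groups is flagged as unfair only when the minority group's score falls outside the interval that small-sample variation alone could produce.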
