Department of Engineering Science, Institute of Biomedical Engineering, University of Oxford, Oxford, England.
Nuffield Department of Population Health, University of Oxford, Oxford, England.
Sci Rep. 2024 Jun 10;14(1):13318. doi: 10.1038/s41598-024-64210-5.
Collaborative efforts in artificial intelligence (AI) are increasingly common between high-income countries (HICs) and low- to middle-income countries (LMICs). Given the resource limitations often encountered by LMICs, collaboration becomes crucial for pooling resources and expertise. Despite the apparent advantages, ensuring the fairness and equity of these collaborative models is essential, especially considering the distinct differences between LMIC and HIC hospitals. In this study, we show that collaborative AI approaches can lead to divergent performance outcomes across HIC and LMIC settings, particularly in the presence of data imbalances. Through a real-world COVID-19 screening case study, we demonstrate that implementing algorithmic-level bias mitigation methods significantly improves outcome fairness between HIC and LMIC sites while maintaining high diagnostic sensitivity. We compare our results against previous benchmarks, utilizing datasets from four independent United Kingdom hospitals and one Vietnamese hospital, representing HIC and LMIC settings, respectively.
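The abstract does not specify which algorithmic-level bias mitigation method was used. As a hedged illustration only, the sketch below shows one common approach of that kind: reweighting the training loss so a small minority site (standing in for an LMIC hospital) contributes equally to model fitting despite the data imbalance. The site sizes, feature distributions, and the choice of logistic regression are all illustrative assumptions, not details from the study.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Synthetic stand-in for an imbalanced collaboration: a large "HIC" site
# and a much smaller "LMIC" site with a shifted feature distribution.
n_hic, n_lmic = 1000, 100
X_hic = rng.normal(0.0, 1.0, (n_hic, 5))
X_lmic = rng.normal(0.5, 1.2, (n_lmic, 5))
y_hic = (X_hic[:, 0] + 0.1 * rng.normal(size=n_hic) > 0.0).astype(int)
y_lmic = (X_lmic[:, 0] + 0.1 * rng.normal(size=n_lmic) > 0.5).astype(int)

X = np.vstack([X_hic, X_lmic])
y = np.concatenate([y_hic, y_lmic])
site = np.array([0] * n_hic + [1] * n_lmic)  # 0 = HIC, 1 = LMIC

# Algorithmic-level mitigation: weight each sample inversely to its
# site's share of the pooled data, so each site carries equal total
# weight in the training loss.
weights = np.where(site == 0,
                   len(site) / (2 * n_hic),
                   len(site) / (2 * n_lmic))

model = LogisticRegression().fit(X, y, sample_weight=weights)

def sensitivity(clf, X_site, y_site):
    """Fraction of true positives correctly flagged at one site."""
    pred = clf.predict(X_site)
    return (pred[y_site == 1] == 1).mean()

print(f"HIC sensitivity:  {sensitivity(model, X_hic, y_hic):.2f}")
print(f"LMIC sensitivity: {sensitivity(model, X_lmic, y_lmic):.2f}")
```

Comparing the per-site sensitivities of this reweighted model against an unweighted baseline (drop `sample_weight`) shows the kind of gap-narrowing effect the study reports, though the paper's actual method and datasets differ from this toy setup.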