Health Informatics Department, Hospital Italiano de Buenos Aires, Ciudad Autónoma de Buenos Aires, Argentina.
Universidad Tecnológica Nacional, Ciudad Autónoma de Buenos Aires, Argentina.
Nat Commun. 2022 Aug 6;13(1):4581. doi: 10.1038/s41467-022-32186-3.
A large body of work has shown that AI systems can be systematically and unfairly biased against certain populations in multiple scenarios. The field of medical imaging, where AI systems are increasingly being adopted, is no exception. Here we discuss the meaning of fairness in this area and comment on the potential sources of bias, as well as the strategies available to mitigate them. Finally, we analyze the current state of the field, identifying strengths and highlighting gaps, challenges and opportunities that lie ahead.
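The abstract refers to fairness in AI systems without defining a specific metric. As an illustrative sketch only (not a method from this paper), one common group-fairness quantity is the gap in true positive rate across demographic groups, sometimes called an equal-opportunity gap; the function name and data below are assumptions for demonstration:

```python
import numpy as np

def tpr_gap(y_true, y_pred, group):
    """Illustrative group-fairness metric: the spread in true
    positive rate (sensitivity) across demographic groups.
    A value of 0 means all groups have equal TPR."""
    tprs = []
    for g in np.unique(group):
        # restrict to actual positives within this group
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return max(tprs) - min(tprs)

# Hypothetical toy data: group A has TPR 0.75, group B has TPR 0.5
y_true = np.array([1, 1, 1, 1, 1, 1, 0, 0])
y_pred = np.array([1, 1, 0, 1, 1, 0, 0, 1])
group  = np.array(["A", "A", "A", "A", "B", "B", "A", "B"])

print(tpr_gap(y_true, y_pred, group))  # → 0.25
```

Metrics of this kind can be computed per protected attribute (e.g., sex, age group, skin tone) to audit a medical imaging classifier, though the choice of metric itself encodes a particular definition of fairness.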