Mittermaier Mirja, Raza Marium M, Kvedar Joseph C
Charité-Universitätsmedizin Berlin, Corporate Member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Department of Infectious Diseases, Respiratory Medicine and Critical Care, Berlin, Germany.
Berlin Institute of Health at Charité-Universitätsmedizin Berlin, Charitéplatz 1, 10117, Berlin, Germany.
NPJ Digit Med. 2023 Jun 14;6(1):113. doi: 10.1038/s41746-023-00858-z.
Artificial intelligence (AI) systems are increasingly being applied to healthcare. In surgery, AI applications hold promise as tools to predict surgical outcomes, assess technical skills, or guide surgeons intraoperatively via computer vision. On the other hand, AI systems can also suffer from bias, compounding existing inequities in socioeconomic status, race, ethnicity, religion, gender, disability, or sexual orientation. Bias particularly impacts disadvantaged populations, which can be subject to algorithmic predictions that are less accurate or that underestimate the need for care. Thus, strategies for detecting and mitigating bias are pivotal for creating AI technology that is generalizable and fair. Here, we discuss a recent study that developed a new strategy to mitigate bias in surgical AI systems.