Alderman Joseph E, Palmer Joanne, Laws Elinor, McCradden Melissa D, Ordish Johan, Ghassemi Marzyeh, Pfohl Stephen R, Rostamzadeh Negar, Cole-Lewis Heather, Glocker Ben, Calvert Melanie, Pollard Tom J, Gill Jaspret, Gath Jacqui, Adebajo Adewale, Beng Jude, Leung Cassandra H, Kuku Stephanie, Farmer Lesley-Anne, Matin Rubeta N, Mateen Bilal A, McKay Francis, Heller Katherine, Karthikesalingam Alan, Treanor Darren, Mackintosh Maxine, Oakden-Rayner Lauren, Pearson Russell, Manrai Arjun K, Myles Puja, Kumuthini Judit, Kapacee Zoher, Sebire Neil J, Nazer Lama H, Seah Jarrel, Akbari Ashley, Berman Lew, Gichoya Judy W, Righetto Lorenzo, Samuel Diana, Wasswa William, Charalambides Maria, Arora Anmol, Pujari Sameer, Summers Charlotte, Sapey Elizabeth, Wilkinson Sharon, Thakker Vishal, Denniston Alastair, Liu Xiaoxuan
University Hospitals Birmingham NHS Foundation Trust, Birmingham, UK; National Institute for Health and Care Research (NIHR) Birmingham Biomedical Research Centre, Birmingham, UK; University of Birmingham, Birmingham, UK.
Department of Bioethics, The Hospital for Sick Children, Toronto, ON, Canada; Genetics and Genome Biology, SickKids Research Institute, Toronto, ON, Canada.
Lancet Digit Health. 2025 Jan;7(1):e64-e88. doi: 10.1016/S2589-7500(24)00224-3. Epub 2024 Dec 18.
Without careful dissection of the ways in which biases can be encoded into artificial intelligence (AI) health technologies, there is a risk of perpetuating existing health inequalities at scale. One major source of bias is the data that underpins such technologies. The STANDING Together recommendations aim to encourage transparency regarding limitations of health datasets and proactive evaluation of their effect across population groups. Draft recommendation items were informed by a systematic review and stakeholder survey. The recommendations were developed using a Delphi approach, supplemented by a public consultation and international interview study. Overall, more than 350 representatives from 58 countries provided input into this initiative. 194 Delphi participants from 25 countries voted and provided comments on 32 candidate items across three electronic survey rounds and one in-person consensus meeting. The 29 STANDING Together consensus recommendations are presented here in two parts. Recommendations for Documentation of Health Datasets provide guidance for dataset curators to enable transparency around data composition and limitations. Recommendations for Use of Health Datasets aim to enable identification and mitigation of algorithmic biases that might exacerbate health inequalities. These recommendations are intended to prompt proactive inquiry rather than acting as a checklist. We hope to raise awareness that no dataset is free of limitations, so transparent communication of data limitations should be perceived as valuable, and absence of this information as a limitation. We hope that adoption of the STANDING Together recommendations by stakeholders across the AI health technology lifecycle will enable everyone in society to benefit from technologies which are safe and effective.