Division of Health Policy and Management, University of Minnesota School of Public Health, 516 Delaware St SE, Minneapolis, MN 55455.
Am J Manag Care. 2024 May;30(6 Spec No.):SP468-SP472. doi: 10.37765/ajmc.2024.89555.
Objectives: To understand whether and how equity is considered in artificial intelligence/machine learning governance processes at academic medical centers.
Study Design: Qualitative analysis of interview data.
Methods: We created a database of academic medical centers from the full list of Association of American Medical Colleges hospital and health system members in 2022. Stratifying by census region and restricting to nonfederal, nonspecialty centers, we recruited chief medical informatics officers and similarly positioned individuals from academic medical centers across the country. We created and piloted a semistructured interview guide focused on (1) how academic medical centers govern artificial intelligence and predictive technologies and (2) the extent to which equity is considered in these processes. A total of 17 individuals representing 13 institutions across 4 US census regions were interviewed.
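For illustration only, a minimal Python sketch of the stratified, restricted recruitment approach described above; the data fields, the per-region quota, and the use of random sampling are assumptions made for the sketch, not details reported by the study.

```python
import random
from collections import defaultdict

# Hypothetical records standing in for the full AAMC member list;
# field names are assumptions, not from the study.
centers = [
    {"name": "Center A", "region": "Midwest", "federal": False, "specialty": False},
    {"name": "Center B", "region": "South", "federal": True, "specialty": False},
    {"name": "Center C", "region": "West", "federal": False, "specialty": True},
    {"name": "Center D", "region": "Midwest", "federal": False, "specialty": False},
]

# Restrict to nonfederal, nonspecialty academic medical centers.
eligible = [c for c in centers if not c["federal"] and not c["specialty"]]

# Stratify the eligible centers by US census region.
strata = defaultdict(list)
for c in eligible:
    strata[c["region"]].append(c)

# Sample up to k centers per region for recruitment outreach
# (the quota k and random selection are assumptions for this sketch).
k = 4
recruits = [
    c
    for region_centers in strata.values()
    for c in random.sample(region_centers, min(k, len(region_centers)))
]
print([c["name"] for c in recruits])
```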
Results: A minority of participants reported considering inequity, racism, or bias in governance. Most participants conceptualized these issues as characteristics of a tool, using frameworks such as algorithmic bias or fairness. Fewer participants conceptualized equity beyond the technology itself and asked broader questions about its implications for patients. Disparities in health information technology resources across health systems were repeatedly identified as a threat to health equity.
Conclusions: We found a lack of consistent equity consideration among academic medical centers as they develop their governance processes for predictive technologies, despite considerable national attention to the ways these technologies can cause or reproduce inequities. Health systems and policy makers will need to specifically prioritize equity literacy among health system leadership, design oversight policies, and promote critical engagement with these tools and their implications to prevent the further entrenchment of inequities in digital health care.