J Law Health. 2021;34(2):215-251.
Systemic discrimination in healthcare plagues marginalized groups. Physicians incorrectly view people of color as having high pain tolerance, leading to undertreatment. Women with disabilities often go undiagnosed because their symptoms are dismissed. Low-income patients have less access to appropriate treatment. These patterns, and others, reflect long-standing disparities that have become ingrained in U.S. health systems. As the healthcare industry adopts artificial intelligence and algorithm-informed (AI) tools, it is vital that regulators address healthcare discrimination. AI tools are increasingly used by hospitals, physicians, and insurers to make both clinical and administrative decisions, yet no framework specifically places nondiscrimination obligations on AI users. The Food and Drug Administration has limited authority to regulate AI and has not sought to incorporate anti-discrimination principles into its guidance. Section 1557 of the Affordable Care Act has not been used to enforce nondiscrimination in healthcare AI and is under-utilized by the Office for Civil Rights. State-level protections through medical licensing boards or malpractice liability are similarly untested and have not yet extended nondiscrimination obligations to AI. This Article discusses the role of each legal obligation in healthcare AI and the ways in which each system can improve to address discrimination. It highlights the ways in which industries can self-regulate to set nondiscrimination standards and concludes by recommending standards and the creation of a super-regulator to address disparate impact by AI. As the world moves toward automation, it is imperative that ongoing concerns about systemic discrimination be addressed to prevent further marginalization in healthcare.