Panagides Reanna, Keim-Malpass Jessica
University of Virginia, School of Nursing, Charlottesville, VA, USA.
University of Virginia, School of Medicine, Charlottesville, VA, USA.
Int J Nurs Stud Adv. 2025 Jul 6;9:100380. doi: 10.1016/j.ijnsa.2025.100380. eCollection 2025 Dec.
Clinical algorithms are commonly used as decision-support tools, incorporating patient-specific characteristics to predict health outcomes. Risk calculators are clinical algorithms particularly suited for resource allocation based on risk estimation. Although these calculators typically rely on physiologic data, they frequently include demographic variables such as race, sex, and age as well. In recent years, the inclusion of race as an input variable has been scrutinized for being reductive, serving as a poor proxy for biological differences, and contributing to the inequitable distribution of services. Little attention has been given to other demographic features, such as sex and age, and their potential to produce similar consequences. By applying a framework for understanding sources of harm throughout the machine learning life cycle and presenting case studies, this paper aims to examine sources of potential harm (i.e., representational and allocative harm) associated with including sex and age in clinical decision-making algorithms, particularly risk calculators. In doing so, this paper demonstrates how systematic discrimination, reductive measurement practices, and observed differences in risk estimation between demographic groups contribute to the representational and allocative harm caused by including sex and age in clinical algorithms used for resource distribution. Ultimately, this paper urges clinicians to scrutinize the practice of including reductive demographic features (i.e., race, binary-coded sex, and chronological age) as proxies for underlying biological mechanisms in their risk estimations, as this practice violates the bioethical principles of justice and nonmaleficence. Practicing clinicians, including nurses, must have foundational model literacy to address potential biases introduced during algorithm development, validation, and clinical practice.