Bedmutha Manas Satish, Bascom Emily, Sladek Kimberly R, Tobar Kelly, Casanova-Perez Reggie, Andreiu Alexandra, Bhat Amrit, Mangal Sabrina, Wood Brian R, Sabin Janice, Pratt Wanda, Weibel Nadir, Hartzler Andrea L
Department of Computer Science and Engineering, University of California San Diego, La Jolla, CA 92093, United States.
Department of Human Centered Design and Engineering, School of Engineering, University of Washington, Seattle, WA 98195, United States.
JAMIA Open. 2024 Oct 18;7(4):ooae106. doi: 10.1093/jamiaopen/ooae106. eCollection 2024 Dec.
Implicit bias perpetuates health care inequities and manifests in patient-provider interactions, particularly in nonverbal social cues such as dominance. We investigated the use of artificial intelligence (AI) for automated communication assessment and feedback during primary care visits to raise clinician awareness of bias in patient interactions.
We (1) assessed the technical performance of our AI models by building a machine-learning pipeline that automatically detects social signals in patient-provider interactions from 145 primary care visits, (2) engaged 24 clinicians to design usable AI-generated communication feedback for their workflow, and (3) evaluated the impact of our AI-based approach in a prospective cohort of 108 primary care visits.
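To make the pipeline step concrete, the sketch below shows one way a social-signal detector of this kind might be trained on per-visit nonverbal features. It is a minimal illustration under stated assumptions: the feature set (speaking-time ratio, interruption count, gaze proportion), the binary dominance labels, and the random-forest model with 5-fold cross-validation are all hypothetical choices, not the authors' actual implementation.

```python
# Minimal, hypothetical sketch of a social-signal detection pipeline.
# Features, labels, and model choice are illustrative assumptions,
# not the pipeline described in the study.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

# Stand-in for nonverbal features extracted per visit, e.g.,
# [speaking-time ratio, interruption count, gaze proportion].
X = rng.random((145, 3))          # 145 visits x 3 illustrative features

# Stand-in for human-annotated labels, e.g., high vs low provider dominance.
y = rng.integers(0, 2, size=145)

# Train and evaluate a classifier with 5-fold cross-validation.
clf = RandomForestClassifier(n_estimators=100, random_state=0)
scores = cross_val_score(clf, X, y, cv=5, scoring="roc_auc")
print(f"Mean cross-validated AUC: {scores.mean():.2f}")
```

In practice, each social signal (dominance, warmth, engagement, interactivity) would be a separate prediction target, with features derived from audio and video of the visit rather than random placeholders.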
Findings demonstrate the feasibility of AI models for identifying social signals, such as dominance, warmth, engagement, and interactivity, in nonverbal patient-provider communication. Clinicians preferred feedback delivered in personalized dashboards but found raw nonverbal cues difficult to interpret, motivating social signals as an alternative feedback mechanism. Impact evaluation demonstrated fairness across all AI models, with better generalizability for provider dominance, provider engagement, and patient warmth. Stronger clinician implicit race bias was associated with less provider dominance and warmth. Although clinicians expressed overall interest in our AI approach, they recommended improvements to enhance acceptability, feasibility, and implementation in telehealth and medical education contexts.
Findings demonstrate promise for AI-driven communication assessment and feedback systems focused on social signals. Future work should improve the performance of this approach, personalize models, contextualize feedback, and investigate system implementation in educational workflows. This work exemplifies a systematic, multistage approach for evaluating AI tools designed to raise clinician awareness of implicit bias and promote patient-centered, equitable health care interactions.