Xie Serena Jinchen, Spice Carolin, Wedgeworth Patrick, Langevin Raina, Lybarger Kevin, Singh Angad Preet, Wood Brian R, Klein Jared W, Hsieh Gary, Duber Herbert C, Hartzler Andrea L
Biomedical Informatics and Medical Education, School of Medicine, University of Washington, Seattle, WA 98195, United States.
Information Sciences and Technology, George Mason University, Fairfax, VA 22030, United States.
J Am Med Inform Assoc. 2025 May 1;32(5):855-865. doi: 10.1093/jamia/ocaf046.
Artificial Intelligence (AI)-based approaches for extracting Social Drivers of Health (SDoH) from clinical notes offer healthcare systems an efficient way to identify patients' social needs, yet we know little about the acceptability of this approach to patients and clinicians. We investigated patient and clinician acceptability through interviews.
We interviewed primary care patients experiencing social needs (n = 19) and clinicians (n = 14) about the acceptability of "SDoH autosuggest," an AI-based approach for extracting SDoH from clinical notes. We presented storyboards depicting the approach and asked participants to rate its acceptability and discuss their rationale.
Participants rated SDoH autosuggest moderately acceptable (patients: mean = 3.9/5; clinicians: mean = 3.6/5). Patients' ratings varied across domains, with substance use rated most acceptable and employment rated least acceptable. Both groups raised concerns about information integrity, actionability, impact on clinical interactions and relationships, and privacy. In addition, patients raised concerns about transparency, autonomy, and potential harm, whereas clinicians raised concerns about usability.
Despite reporting moderate acceptability of the envisioned approach, patients and clinicians expressed multiple concerns about AI systems that extract SDoH. Participants emphasized the need for high-quality data, non-intrusive presentation methods, and clear communication strategies regarding sensitive social needs. Findings underscore the importance of engaging patients and clinicians to mitigate unintended consequences when integrating AI approaches into care.
Although AI approaches like SDoH autosuggest hold promise for efficiently identifying SDoH from clinical notes, they must also account for concerns of patients and clinicians to ensure these systems are acceptable and do not undermine trust.