Tsai Chuan-Ching, Kim Jin Yong, Chen Qiyuan, Rowell Brigid, Yang X Jessie, Kontar Raed, Whitaker Megan, Lester Corey
Department of Clinical Pharmacy, College of Pharmacy, University of Michigan, Ann Arbor, MI, United States.
Department of Industrial and Operations Engineering, College of Engineering, University of Michigan, Ann Arbor, MI, United States.
J Med Internet Res. 2025 Jan 31;27:e59946. doi: 10.2196/59946.
Clinical decision support systems leveraging artificial intelligence (AI) are increasingly integrated into health care practices, including pharmacy medication verification. Communicating uncertainty in an AI prediction is viewed as an important mechanism for fostering human-AI collaboration and trust. Yet, little is known about how interacting with such AI advice affects human cognition.
This study aimed to evaluate pharmacists' cognitive interaction patterns during medication product verification when using an AI prototype. We also examined the impact of AI assistance, both helpful and unhelpful, and of communicating the uncertainty of AI-generated results on pharmacists' cognitive interaction with the prototype.
In a randomized controlled trial, 30 pharmacists recruited from professional networks each performed 200 medication verification tasks while their eye movements were recorded with an online eye tracker. Participants completed 100 verifications without AI assistance and 100 with AI assistance (either black-box help without uncertainty information or uncertainty-aware help that displayed AI uncertainty). Fixation patterns (first and last areas fixated, number of fixations, fixation duration, and dwell times) were analyzed in relation to AI help type and helpfulness.
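As an illustrative sketch only (not the authors' analysis code), the fixation metrics listed above could be aggregated from raw fixation records roughly as follows; the column names and area-of-interest labels ("reference_image", "fill_image", "ai_region") are hypothetical.

    # Hypothetical example: aggregating eye-tracking fixations per area of interest (AOI)
    import pandas as pd

    fixations = pd.DataFrame({
        "trial_id":    [1, 1, 1, 2, 2],
        "aoi":         ["reference_image", "fill_image", "ai_region",
                        "fill_image", "reference_image"],
        "duration_ms": [220, 410, 180, 350, 260],
    })

    per_trial = fixations.groupby(["trial_id", "aoi"]).agg(
        n_fixations=("duration_ms", "size"),    # number of fixations on each AOI
        dwell_time_ms=("duration_ms", "sum"),   # total dwell time on each AOI
        mean_fix_ms=("duration_ms", "mean"),    # mean fixation duration on each AOI
    )
    print(per_trial)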
Pharmacists shifted 19%-26% of their total fixations to AI-generated regions when these were available, suggesting that AI advice was integrated into decision-making. AI assistance did not reduce the number of fixations on fill images, which remained the primary focus area. Unhelpful AI advice led to longer dwell times on reference and fill images, indicating increased cognitive processing. Displaying AI uncertainty led to longer cognitive processing times, as measured by dwell times on the original images.
Unhelpful AI advice increases cognitive processing time on the original images. Transparency is needed in "black box" AI systems, but showing more information can add cognitive burden. The communication of uncertainty should therefore be optimized and integrated into clinical workflows through user-centered design to avoid increasing cognitive load or impeding clinicians' existing workflow.
ClinicalTrials.gov NCT06795477; https://clinicaltrials.gov/study/NCT06795477.