Digital Products and Informatics Division, DigITs, Memorial Sloan Kettering Cancer Center, New York, NY.
Digital Products and Informatics Division, DigITs, Memorial Sloan Kettering Cancer Center, New York, NY; Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, NY.
J Biomed Inform. 2023 Dec;148:104550. doi: 10.1016/j.jbi.2023.104550. Epub 2023 Nov 20.
BACKGROUND: Artificial intelligence and machine learning (AI/ML) technologies such as generative and ambient AI solutions are proliferating in real-world healthcare settings. Clinician trust affects the adoption and impact of these systems, so organizations need a validated method for assessing the factors underlying trust in and acceptance of AI in clinical workflows.

OBJECTIVE: Our study set out to develop and assess a novel clinician-centered model that measures and explains trust in and adoption of AI technology. We hypothesized that clinicians' system-specific Trust in AI is the primary predictor of both Acceptance (i.e., willingness to adopt) and post-adoption Trusting Stance (i.e., general stance toward any AI system). We validated the new model at an urban comprehensive cancer center and produced an easily implemented survey instrument for measuring clinician trust and adoption of AI.

METHODS: This survey-based, cross-sectional, psychometric study comprised a model development phase and a validation phase. Measurement used five-point ascending unidirectional Likert scales. The development sample included N = 93 clinicians (physicians, advanced practice providers, nurses) who used an AI-based communication application. The validation sample included N = 73 clinicians who used a commercially available AI-powered speech-to-text application for note-writing in an electronic health record (EHR). Analytical procedures included exploratory factor analysis (EFA), confirmatory factor analysis (CFA), and partial least squares structural equation modeling (PLS-SEM). The Johnson-Neyman (JN) methodology was used to determine moderator effects.

RESULTS: In the fully moderated causal model, clinician trust explained a large share of the variance in acceptance of a specific AI application (56%) and in post-adoption general trusting stance toward AI (36%).
Moderators included organizational assurances, length of time using the application, and clinician age. The final validated instrument has 20 items and takes 5 minutes to complete on average.

CONCLUSIONS: We found that clinician acceptance of AI is determined by the degree of trust formed via information credibility, perceived application value, and reliability. The novel model, TrAAIT, explains the factors underlying AI trustworthiness and acceptance for clinicians. With its easy-to-use instrument and Summative Score Dashboard, TrAAIT can help organizations implementing AI identify and intercept barriers to clinician adoption in real-world settings.
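The METHODS section names the Johnson-Neyman (JN) methodology for probing moderator effects such as clinician age. A minimal sketch of that procedure on simulated data follows; all variable names, coefficient values, and the large-sample normal approximation to the t critical value are assumptions for illustration, not the study's data or exact implementation. For a moderated model y = b0 + b1·trust + b2·age + b3·trust·age + e, the conditional effect of trust at a given age is b1 + b3·age, and JN locates the age values where that effect crosses the significance threshold.

```python
# Johnson-Neyman sketch on simulated (hypothetical) data.
import numpy as np
from statistics import NormalDist

rng = np.random.default_rng(0)
n = 200
trust = rng.normal(0, 1, n)             # hypothetical predictor
age = rng.normal(45, 10, n)             # hypothetical moderator
y = 1.0 + 0.6 * trust + 0.01 * age - 0.01 * trust * age + rng.normal(0, 1, n)

# Design matrix: intercept, trust, age, trust x age interaction.
X = np.column_stack([np.ones(n), trust, age, trust * age])
beta, _, _, _ = np.linalg.lstsq(X, y, rcond=None)
resid = y - X @ beta
df = n - X.shape[1]
sigma2 = (resid @ resid) / df
cov = sigma2 * np.linalg.inv(X.T @ X)   # coefficient covariance matrix

# Conditional effect of trust across the observed moderator range,
# with its standard error: Var(b1 + b3*m) = V11 + m^2 V33 + 2m V13.
ages = np.linspace(age.min(), age.max(), 200)
effect = beta[1] + beta[3] * ages
se = np.sqrt(cov[1, 1] + ages**2 * cov[3, 3] + 2 * ages * cov[1, 3])

# Normal approximation to the t critical value (df is large here).
z_crit = NormalDist().inv_cdf(0.975)
significant = np.abs(effect / se) > z_crit   # JN region(s) of significance
```

The boolean `significant` mask marks the moderator values (ages) at which the trust-on-acceptance effect is statistically distinguishable from zero; the boundaries of that region are the Johnson-Neyman points.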