

Theory of trust and acceptance of artificial intelligence technology (TrAAIT): An instrument to assess clinician trust and acceptance of artificial intelligence.

Affiliations

Digital Products and Informatics Division, DigITs, Memorial Sloan Kettering Cancer Center, New York, NY.

Digital Products and Informatics Division, DigITs, Memorial Sloan Kettering Cancer Center, New York, NY; Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, NY.

Publication information

J Biomed Inform. 2023 Dec;148:104550. doi: 10.1016/j.jbi.2023.104550. Epub 2023 Nov 20.

Abstract

BACKGROUND

Artificial intelligence and machine learning (AI/ML) technologies such as generative and ambient AI solutions are proliferating in real-world healthcare settings. Clinician trust affects both the adoption and the impact of these systems. To improve both, organizations need a validated method for assessing the factors that underlie clinicians' trust in and acceptance of AI for clinical workflows.

OBJECTIVE

Our study set out to develop and assess a novel clinician-centered model to measure and explain trust in and adoption of AI technology. We hypothesized that clinicians' system-specific Trust in AI is the primary predictor of both Acceptance (i.e., willingness to adopt) and post-adoption Trusting Stance (i.e., general stance towards any AI system). We validated the new model at an urban comprehensive cancer center. We produced an easily implemented survey tool for measuring clinician trust in and adoption of AI.

METHODS

This survey-based, cross-sectional, psychometric study included a model development phase and a validation phase. Measurement used five-point ascending unidirectional Likert scales. The development sample included N = 93 clinicians (physicians, advanced practice providers, nurses) who used an AI-based communication application. The validation sample included N = 73 clinicians who used a commercially available AI-powered speech-to-text application for note-writing in an electronic health record (EHR). Analytical procedures included exploratory factor analysis (EFA), confirmatory factor analysis (CFA), and partial least squares structural equation modeling (PLS-SEM). The Johnson-Neyman (JN) methodology was used to determine moderator effects.
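
Likert-scale instruments of the kind described above are typically checked for internal consistency before factor analysis. As an illustration only (hypothetical data, not the study's code), a minimal Cronbach's alpha computation using just the Python standard library:

```python
# Hedged sketch: internal-consistency (Cronbach's alpha) check for a Likert scale.
# The responses below are hypothetical 5-point ratings (rows = clinicians, cols = items).
from statistics import pvariance

def cronbach_alpha(responses):
    """Cronbach's alpha for a list of per-respondent item-score lists."""
    k = len(responses[0])                 # number of items in the scale
    items = list(zip(*responses))         # transpose to per-item columns
    item_var = sum(pvariance(col) for col in items)
    total_var = pvariance([sum(row) for row in responses])
    return (k / (k - 1)) * (1 - item_var / total_var)

sample = [
    [4, 5, 4, 4],
    [3, 3, 4, 3],
    [5, 5, 5, 4],
    [2, 3, 2, 3],
    [4, 4, 5, 5],
]
print(round(cronbach_alpha(sample), 3))  # ~0.914 for this toy sample
```

Values above roughly 0.7 are conventionally taken as acceptable reliability before proceeding to EFA/CFA.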

RESULTS

In the fully moderated causal model, clinician trust explained a large share of the variance in acceptance of a specific AI application (56%) and in post-adoption trusting stance towards AI in general (36%). Moderators included organizational assurances, length of time using the application, and clinician age. The final validated instrument has 20 items and takes 5 min to complete on average.
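
The "variance explained" figures (56%, 36%) are R² values from the structural model. As a toy illustration (hypothetical numbers, not the study's data or its PLS-SEM estimation), R² for a simple regression of acceptance on trust:

```python
# Hedged sketch: R^2 ("variance explained") for a simple least-squares fit.
# Toy trust/acceptance scores; the study itself used PLS-SEM, not bivariate OLS.
def r_squared(x, y):
    """R^2 of a simple least-squares regression of y on x."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    sxy = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sxx = sum((a - mx) ** 2 for a in x)
    syy = sum((b - my) ** 2 for b in y)
    return (sxy ** 2) / (sxx * syy)   # squared Pearson correlation

trust      = [2.0, 3.0, 3.5, 4.0, 4.5, 5.0]   # hypothetical predictor scores
acceptance = [2.5, 2.8, 3.6, 3.9, 4.4, 4.8]   # hypothetical outcome scores
print(round(r_squared(trust, acceptance), 3))  # ~0.962 for this toy sample
```

An R² of 0.56, as reported for Acceptance, would mean trust accounts for a little over half of the variance in clinicians' willingness to adopt.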

CONCLUSIONS

We found that clinician acceptance of AI is determined by their degree of trust formed via information credibility, perceived application value, and reliability. The novel model, TrAAIT, explains factors underlying AI trustworthiness and acceptance for clinicians. With its easy-to-use instrument and Summative Score Dashboard, TrAAIT can help organizations implementing AI to identify and intercept barriers to clinician adoption in real-world settings.

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1e5f/10815802/082c3ea8ab9a/nihms-1947340-f0005.jpg

