

Theory of trust and acceptance of artificial intelligence technology (TrAAIT): An instrument to assess clinician trust and acceptance of artificial intelligence.

Affiliations

Digital Products and Informatics Division, DigITs, Memorial Sloan Kettering Cancer Center, New York, NY.

Digital Products and Informatics Division, DigITs, Memorial Sloan Kettering Cancer Center, New York, NY; Department of Medicine, Memorial Sloan Kettering Cancer Center, New York, NY.

Publication Information

J Biomed Inform. 2023 Dec;148:104550. doi: 10.1016/j.jbi.2023.104550. Epub 2023 Nov 20.


DOI: 10.1016/j.jbi.2023.104550
PMID: 37981107
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC10815802/
Abstract

BACKGROUND: Artificial intelligence and machine learning (AI/ML) technologies like generative and ambient AI solutions are proliferating in real-world healthcare settings. Clinician trust affects adoption and impact of these systems. Organizations need a validated method to assess factors underlying trust and acceptance of AI for clinical workflows in order to improve adoption and the impact of AI.

OBJECTIVE: Our study set out to develop and assess a novel clinician-centered model to measure and explain trust and adoption of AI technology. We hypothesized that clinicians' system-specific Trust in AI is the primary predictor of both Acceptance (i.e., willingness to adopt) and post-adoption Trusting Stance (i.e., general stance towards any AI system). We validated the new model at an urban comprehensive cancer center and produced an easily implemented survey tool for measuring clinician trust and adoption of AI.

METHODS: This survey-based, cross-sectional, psychometric study included a model development phase and a validation phase. Measurement was done with five-point ascending unidirectional Likert scales. The development sample included N = 93 clinicians (physicians, advanced practice providers, nurses) that used an AI-based communication application. The validation sample included N = 73 clinicians that used a commercially available AI-powered speech-to-text application for note-writing in an electronic health record (EHR). Analytical procedures included exploratory factor analysis (EFA), confirmatory factor analysis (CFA), and partial least squares structural equation modeling (PLS-SEM). The Johnson-Neyman (JN) methodology was used to determine moderator effects.

RESULTS: In the fully moderated causal model, clinician trust explained a large amount of variance in their acceptance of a specific AI application (56%) and their post-adoption general trusting stance towards AI (36%). Moderators included organizational assurances, length of time using the application, and clinician age. The final validated instrument has 20 items and takes 5 min to complete on average.

CONCLUSIONS: We found that clinician acceptance of AI is determined by their degree of trust formed via information credibility, perceived application value, and reliability. The novel model, TrAAIT, explains factors underlying AI trustworthiness and acceptance for clinicians. With its easy-to-use instrument and Summative Score Dashboard, TrAAIT can help organizations implementing AI to identify and intercept barriers to clinician adoption in real-world settings.
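The Methods name the Johnson-Neyman (JN) procedure for probing moderator effects (e.g., clinician age moderating the trust-acceptance relationship). The sketch below is a minimal illustration of the JN mechanics on simulated data: it fits a moderated regression and solves for the boundary values of the moderator at which the conditional effect stops being significant. All variable names, coefficients, and sample sizes here are invented for illustration; they are not the study's data or model.

```python
import numpy as np

# Simulated data: a predictor ("trust"), a mean-centered moderator
# ("age_c"), and an outcome ("accept") whose true conditional effect of
# trust is 0.30 + 0.04 * age_c (so it crosses zero inside the data range).
rng = np.random.default_rng(0)
n = 400
trust = rng.normal(size=n)               # predictor, e.g. Trust in AI score
age_c = rng.uniform(-20, 20, size=n)     # moderator, mean-centered
accept = 0.30 * trust + 0.04 * trust * age_c + rng.normal(size=n)

# Moderated regression: accept ~ 1 + trust + age_c + trust:age_c
X = np.column_stack([np.ones(n), trust, age_c, trust * age_c])
beta, *_ = np.linalg.lstsq(X, accept, rcond=None)
resid = accept - X @ beta
sigma2 = resid @ resid / (n - 4)
cov = sigma2 * np.linalg.inv(X.T @ X)    # coefficient covariance matrix

# Conditional effect of trust at moderator value w: theta(w) = b1 + b3*w.
# JN boundaries solve theta(w)^2 = t^2 * Var(theta(w)), a quadratic in w.
t = 1.966                                # two-tailed .05 critical t, df = 396
a = beta[3] ** 2 - t ** 2 * cov[3, 3]
b = 2 * (beta[1] * beta[3] - t ** 2 * cov[1, 3])
c = beta[1] ** 2 - t ** 2 * cov[1, 1]
disc = b ** 2 - 4 * a * c
w_lo = (-b - disc ** 0.5) / (2 * a)
w_hi = (-b + disc ** 0.5) / (2 * a)
print(f"Effect of trust is non-significant for moderator values in "
      f"({w_lo:.1f}, {w_hi:.1f})")
```

Between the two JN boundaries the conditional effect cannot be distinguished from zero; outside them it is significant at the .05 level. With real data one would typically use a statistics package (e.g., statsmodels for the regression) and the exact critical t for the model's degrees of freedom rather than a hardcoded value.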


Similar Articles

[1]
Theory of trust and acceptance of artificial intelligence technology (TrAAIT): An instrument to assess clinician trust and acceptance of artificial intelligence.

J Biomed Inform. 2023-12

[2]
Barriers to and facilitators of clinician acceptance and use of artificial intelligence in healthcare settings: a scoping review.

BMJ Open. 2025-4-15

[3]
Acceptance of Using Artificial Intelligence and Digital Technology for Mental Health Interventions: The Development and Initial Validation of the UTAUT-AI-DMHI.

Clin Psychol Psychother. 2025

[4]
Exploring Nurses' Behavioural Intention to Adopt AI Technology: The Perspectives of Social Influence, Perceived Job Stress and Human-Machine Trust.

J Adv Nurs. 2025-7

[5]
The Willingness of Doctors to Adopt Artificial Intelligence-Driven Clinical Decision Support Systems at Different Hospitals in China: Fuzzy Set Qualitative Comparative Analysis of Survey Data.

J Med Internet Res. 2025-1-7

[6]
Trust and Acceptance Challenges in the Adoption of AI Applications in Health Care: Quantitative Survey Analysis.

J Med Internet Res. 2025-3-21

[7]
User Intent to Use DeepSeek for Health Care Purposes and Their Trust in the Large Language Model: Multinational Survey Study.

JMIR Hum Factors. 2025-5-26

[8]
Trust in and Acceptance of Artificial Intelligence Applications in Medicine: Mixed Methods Study.

JMIR Hum Factors. 2024-1-17

[9]
User Intent to Use DeepSeek for Healthcare Purposes and their Trust in the Large Language Model: Multinational Survey Study.

JMIR Hum Factors. 2025-4-7

[10]
Modeling the influence of attitudes, trust, and beliefs on endoscopists' acceptance of artificial intelligence applications in medical practice.

Front Public Health. 2023-11-28

Cited By

[1]
A Systematic Review of User Attitudes Toward GenAI: Influencing Factors and Industry Perspectives.

J Intell. 2025-6-27

[2]
Responsible Artificial Intelligence governance in oncology.

NPJ Digit Med. 2025-7-4

[3]
A Model Predicting Artificial Intelligence Use by Gastroenterology Nurses in Clinical Practice: A Cross-Sectional Multicenter Survey.

J Gastroenterol Hepatol. 2025-9

[4]
Exploring the relationship between AI literacy, AI trust, AI dependency, and 21st century skills in preservice mathematics teachers.

Sci Rep. 2025-4-24

[5]
Machine learning-based prediction models in medical decision-making in kidney disease: patient, caregiver, and clinician perspectives on trust and appropriate use.

J Am Med Inform Assoc. 2025-1-1

[6]
Assessment of health technology acceptability for remote monitoring of patients with COVID-19: A measurement model for user perceptions of pulse oximeters.

Digit Health. 2024-9-11

[7]
Human-artificial intelligence interaction in gastrointestinal endoscopy.

World J Gastrointest Endosc. 2024-3-16

References

[1]
An integrative review on the acceptance of artificial intelligence among healthcare professionals in hospitals.

NPJ Digit Med. 2023-6-10

[2]
Foundation models for generalist medical artificial intelligence.

Nature. 2023-4

[3]
AI in the hands of imperfect users.

NPJ Digit Med. 2022-12-28

[4]
Acceptance, initial trust formation, and human biases in artificial intelligence: Focus on clinicians.

Front Digit Health. 2022-8-23

[5]
Correcting the Bias Correction for the Bootstrap Confidence Interval in Mediation Analysis.

Front Psychol. 2022-5-27

[6]
Acceptance, Barriers, and Facilitators to Implementing Artificial Intelligence-Based Decision Support Systems in Emergency Departments: Quantitative and Qualitative Evaluation.

JMIR Form Res. 2022-6-13

[7]
Effect of risk, expectancy, and trust on clinicians' intent to use an artificial intelligence system -- Blood Utilization Calculator.

Appl Ergon. 2022-5

[8]
Artificial intelligence sepsis prediction algorithm learns to say "I don't know".

NPJ Digit Med. 2021-9-9

[9]
The Clinician and Dataset Shift in Artificial Intelligence.

N Engl J Med. 2021-7-15

[10]
External Validation of a Widely Implemented Proprietary Sepsis Prediction Model in Hospitalized Patients.

JAMA Intern Med. 2021-8-1
