Zhang Shiying, Meng Zixuan, Chen Beibei, Yang Xiu, Zhao Xinran
Business and Economic Research Institute, Harbin University of Commerce, Harbin, China.
School of Business, Dalian University of Technology, Dalian, China.
Front Psychol. 2021 Aug 13;12:728495. doi: 10.3389/fpsyg.2021.728495. eCollection 2021.
Users' emotional responses to Artificial Intelligence (AI) virtual assistants are complex, manifesting mainly in user motivation and social emotion, yet current research lacks an effective path for converting emotion into acceptance. This paper approaches the problem from the perspective of trust, establishes an AI virtual assistant acceptance model, and conducts an empirical study based on survey data from 240 questionnaires, analyzed using multilevel regression and the bootstrap method. The results show that functionality and social emotions have a significant effect on trust, that perceived humanity exhibits an inverted U-shaped relationship with trust, and that trust mediates the relationships of both functionality and social emotions with acceptance. The findings explain the emotional complexity of users toward AI virtual assistants and extend the transformation path of technology acceptance from the trust perspective, with implications for the development and design of AI applications.