Industrial and Management Systems Engineering, Benjamin M. Statler College of Engineering and Mineral Resources, West Virginia University, Morgantown, WV, United States.
JMIR Hum Factors. 2024 May 27;11:e55399. doi: 10.2196/55399.
ChatGPT (OpenAI) is a powerful tool for a wide range of tasks, from entertainment and creativity to health care queries, and its use carries both potential risks and benefits. In the discourse on deploying ChatGPT and similar large language models, it is sensible to recommend their use primarily for tasks a human user could execute accurately. As we move into the next phase of ChatGPT deployment, establishing realistic performance expectations and understanding users' perceptions of the risks associated with its use will be crucial to the successful integration of this artificial intelligence (AI) technology.
The aim of the study is to explore how perceived workload, satisfaction, performance expectancy, and risk-benefit perception influence users' trust in ChatGPT.
A semistructured, web-based survey was conducted with 607 adults in the United States who actively use ChatGPT. The survey questions were adapted from constructs used in various models and theories such as the technology acceptance model, the theory of planned behavior, the unified theory of acceptance and use of technology, and research on trust and security in digital environments. To test our hypotheses and structural model, we used the partial least squares structural equation modeling method, a widely used approach for multivariate analysis.
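To make the analytic setup concrete, the sketch below approximates the hypothesized structural model with composite scores and ordinary least squares rather than the full PLS-SEM estimation the authors used. The path structure (workload → satisfaction; satisfaction, performance expectancy, and risk-benefit perception → trust) is inferred from the abstract, and the indicator items, column names, helper function, and coefficients are simulated placeholders, not the study's survey data.

```python
# Simplified sketch of the structural model described above.
# The authors used PLS-SEM; here each latent construct is approximated by the
# mean of hypothetical indicator items, and the structural paths are estimated
# with OLS regressions. This mirrors the assumed path structure but not the
# full PLS-SEM algorithm; all data below are simulated placeholders.
import numpy as np
import pandas as pd
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 607  # sample size reported in the study


def items(latent, k=3, noise=0.5):
    # Build k noisy indicator items for a latent score and average them
    # into a composite (a crude stand-in for a reflective measurement model).
    return pd.DataFrame(
        {f"x{i}": latent + rng.normal(0, noise, n) for i in range(k)}
    ).mean(axis=1)


# Simulated latent scores with arbitrary illustrative effect sizes.
workload = rng.normal(0, 1, n)
satisfaction_latent = -0.4 * workload + rng.normal(0, 1, n)
perf_expect = rng.normal(0, 1, n)
risk_benefit = rng.normal(0, 1, n)
trust_latent = (0.5 * satisfaction_latent + 0.3 * perf_expect
                + 0.2 * risk_benefit + rng.normal(0, 1, n))

df = pd.DataFrame({
    "workload": items(workload),
    "satisfaction": items(satisfaction_latent),
    "performance_expectancy": items(perf_expect),
    "risk_benefit": items(risk_benefit),
    "trust": items(trust_latent),
})

# Structural paths: workload -> satisfaction; satisfaction, performance
# expectancy, and risk-benefit perception -> trust.
sat_model = sm.OLS(df["satisfaction"], sm.add_constant(df[["workload"]])).fit()
trust_model = sm.OLS(
    df["trust"],
    sm.add_constant(df[["satisfaction", "performance_expectancy", "risk_benefit"]]),
).fit()

print(sat_model.params)
print(trust_model.params)
print(trust_model.rsquared)  # analogous to the variance explained in trust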
A total of 607 people responded to our survey. About one-third of the participants reported a high school diploma as their highest educational attainment (n=204, 33.6%), and the largest group held a bachelor's degree (n=262, 43.1%). The participants' primary motivations for using ChatGPT were acquiring information (n=219, 36.1%), amusement (n=203, 33.4%), and addressing problems (n=135, 22.2%). Some participants used it for health-related inquiries (n=44, 7.2%), while a few others (n=6, 1%) used it for miscellaneous activities such as brainstorming, grammar verification, and blog content creation. Our model explained 64.6% of the variance in trust. Our analysis indicated significant relationships between (1) workload and satisfaction, (2) satisfaction and trust, (3) performance expectancy and trust, and (4) risk-benefit perception and trust.
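As an illustration of how path significance is commonly assessed in PLS-SEM reporting, the sketch below bootstraps the trust-equation coefficients from the simulated frame `df` built in the previous sketch. The 500 resamples, the 1.96 cutoff, and the resulting t values are illustrative conventions applied to placeholder data, not the study's reported statistics.

```python
# Bootstrap sketch for path significance (continues from the simulated `df`
# above); PLS-SEM software reports analogous bootstrap t values for each path.
import numpy as np
import pandas as pd
import statsmodels.api as sm


def trust_paths(data):
    # Structural equation for trust: satisfaction, performance expectancy,
    # and risk-benefit perception as predictors.
    X = sm.add_constant(
        data[["satisfaction", "performance_expectancy", "risk_benefit"]]
    )
    return sm.OLS(data["trust"], X).fit().params.drop("const")


est = trust_paths(df)
boot = np.array(
    [trust_paths(df.sample(len(df), replace=True)) for _ in range(500)]
)
se = boot.std(axis=0)
t_values = est.values / se  # |t| > 1.96 is the usual 5% significance convention
print(pd.DataFrame({"estimate": est, "boot_se": se, "t": t_values}))
```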
The findings underscore the importance of ensuring user-friendly design and functionality in AI-based applications to reduce workload and enhance user satisfaction, thereby increasing user trust. Future research should further explore the relationship between risk-benefit perception and trust in the context of AI chatbots.