Industrial and Management Systems Engineering, Benjamin M. Statler College of Engineering and Mineral Resources, West Virginia University, Morgantown, West Virginia, United States of America.
Columbia University School of Nursing, Columbia University Irving Medical Center, New York, New York, United States of America.
PLoS One. 2024 Mar 8;19(3):e0296151. doi: 10.1371/journal.pone.0296151. eCollection 2024.
As ChatGPT emerges as a potential ally in healthcare decision-making, it is imperative to investigate how users leverage and perceive it. Repurposing technology in this way is innovative but brings risks, especially since an AI system's effectiveness depends on the data it is fed. In healthcare, ChatGPT might provide sound advice based on current medical knowledge, advice that could turn into misinformation if its data sources later incorporate erroneous information. Our study assesses user perceptions of ChatGPT, particularly among those who used ChatGPT for healthcare-related queries. By examining factors such as the competence, reliability, transparency, trustworthiness, security, and persuasiveness of ChatGPT, the research aimed to understand how users rely on ChatGPT for health-related decision-making. A web-based survey was distributed to U.S. adults who used ChatGPT at least once a month. Bayesian linear regression was used to assess how much ChatGPT aids informed decision-making. This analysis was conducted on two subsets of respondents: those who used ChatGPT for healthcare decisions and those who did not. Qualitative data from open-ended questions were analyzed using content analysis, with thematic coding to extract user opinions. Six hundred and seven individuals responded to the survey. Respondents were distributed across 306 U.S. cities, with 20 participants from rural areas. Of all the respondents, 44 used ChatGPT for health-related queries and decision-making. In the healthcare context, the most effective model highlights 'Competent + Trustworthy + ChatGPT for healthcare queries', underscoring the critical importance of perceived competence and trustworthiness specifically in healthcare applications of ChatGPT.
On the other hand, the non-healthcare context reveals a broader spectrum of influential factors in its best model: 'Trustworthy + Secure + Benefits outweigh risks + Satisfaction + Willing to take decisions + Intent to use + Persuasive'. In conclusion, our findings suggest a clear demarcation in user expectations of and requirements for AI systems based on the context of their use. We advocate a balanced approach in which technological advancement and user readiness are harmonized.
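The regression approach described above can be illustrated with a minimal sketch. The study does not publish its model code, so the following is a generic conjugate Bayesian linear regression (zero-mean Gaussian prior, known noise variance) on synthetic data; the predictor names and all numbers are hypothetical stand-ins for survey ratings such as perceived competence and trustworthiness, not the study's actual data.

```python
import numpy as np

def bayesian_linear_regression(X, y, prior_var=10.0, noise_var=1.0):
    """Conjugate Bayesian linear regression with a zero-mean Gaussian prior.

    Returns the posterior mean and covariance of the coefficient vector.
    Assumes the noise variance is known; prior_var is the prior variance
    of each coefficient.
    """
    n, d = X.shape
    # Posterior precision = prior precision + data precision
    precision = np.eye(d) / prior_var + X.T @ X / noise_var
    cov = np.linalg.inv(precision)
    mean = cov @ (X.T @ y) / noise_var
    return mean, cov

# Synthetic illustration: two hypothetical predictors (e.g., ratings of
# competence and trustworthiness) and a decision-support outcome score.
rng = np.random.default_rng(0)
X = rng.normal(size=(200, 2))
true_w = np.array([1.5, -0.5])          # coefficients used to simulate data
y = X @ true_w + rng.normal(scale=0.5, size=200)

post_mean, post_cov = bayesian_linear_regression(X, y, noise_var=0.25)
```

With enough observations the posterior mean concentrates near the simulating coefficients, while `post_cov` quantifies the remaining uncertainty, which is the practical reason a Bayesian formulation is attractive for modest subgroup sizes such as the 44 healthcare-query respondents.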