Böhm Robert, Jörling Moritz, Reiter Leonhard, Fuchs Christoph
Faculty of Psychology, University of Vienna, Universitätsstrasse 7, 1010, Vienna, Austria.
Department of Psychology and Copenhagen Center for Social Data Science (SODAS), University of Copenhagen, Øster Farimagsgade 2A, 1353, Copenhagen K, Denmark.
Commun Psychol. 2023 Nov 15;1(1):32. doi: 10.1038/s44271-023-00032-x.
The release of ChatGPT and related tools has made generative artificial intelligence (AI) easily accessible to the broader public. We conducted four preregistered experimental studies (total N = 3308; participants from the US) to investigate people's perceptions of generative AI and of the advice it generates on how to address societal and personal challenges. The results indicate that when individuals are (vs. are not) aware that advice was generated by AI, they devalue the author's competence, but not the content of the advice or their intention to share and follow it, for both societal challenges (Study 1) and personal challenges (Studies 2a and 2b). Study 3 further shows that individuals' preference for receiving advice from AI (vs. human experts) increases once they have gained positive experience with generative AI advice. The results are discussed with regard to the nature of AI aversion in the context of generative AI and beyond.