
Human preferences toward algorithmic advice in a word association task.

Affiliations

Department of Supply Chain and Information Management, Northeastern University, Boston, MA, 02115, USA.

Departments of Biomedical Engineering and Biobehavioral Health, Pennsylvania State University, University Park, PA, 16802, USA.

Publication information

Sci Rep. 2022 Aug 25;12(1):14501. doi: 10.1038/s41598-022-18638-2.

Abstract

Algorithms provide recommendations to human decision makers across a variety of task domains. For many problems, humans will rely on algorithmic advice to make their choices and at times will even show complacency. In other cases, humans are mistrustful of algorithmic advice, or will hold algorithms to higher standards of performance. Given the increasing use of algorithms to support creative work such as text generation and brainstorming, it is important to understand how humans will respond to algorithms in those scenarios: will they show appreciation or aversion? This study tests the effects of algorithmic advice for a word association task, the remote associates test (RAT). The RAT is an established instrument for testing critical and creative thinking with respect to multiple word association. We conducted a preregistered online experiment (154 participants, 2772 observations) to investigate whether humans had stronger reactions to algorithmic or crowd advice when completing multiple instances of the RAT. We used an experimental format in which subjects see a question, answer the question, then receive advice and answer the question a second time. Advice was provided in multiple formats, with advice varying in quality and questions varying in difficulty. We found that individuals receiving algorithmic advice changed their responses 13% more frequently ([Formula: see text], [Formula: see text]) and reported greater confidence in their final solutions. However, individuals receiving algorithmic advice were also 13% less likely to identify the correct solution ([Formula: see text], [Formula: see text]). This study highlights both the promises and pitfalls of leveraging algorithms to support creative work.
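As an illustration of the pre-advice / post-advice protocol described in the abstract, the sketch below models a single RAT trial in Python. It is a minimal sketch under stated assumptions: the cue words (the well-known practice item cottage/swiss/cake -> cheese), the advice-source labels, and all field and function names are illustrative and are not taken from the study materials.

from dataclasses import dataclass

@dataclass
class RATTrial:
    """One remote associates test (RAT) item in the two-stage advice format.

    The three cue words share a single solution word. The example item below
    is the classic practice item, not an item from the study.
    """
    cues: tuple[str, str, str]
    solution: str                 # intended solution word
    advice_source: str            # e.g. "algorithm" or "crowd", per the experiment
    advice: str                   # suggested answer shown after the first response
    first_answer: str = ""        # participant's answer before seeing advice
    final_answer: str = ""        # participant's answer after seeing advice

    def changed_answer(self) -> bool:
        """Did the participant revise their response after receiving advice?"""
        return self.final_answer != self.first_answer

    def final_correct(self) -> bool:
        """Is the post-advice answer the intended solution?"""
        return self.final_answer.strip().lower() == self.solution.lower()


# Hypothetical usage with the practice item; all values are illustrative.
trial = RATTrial(cues=("cottage", "swiss", "cake"), solution="cheese",
                 advice_source="algorithm", advice="cheese",
                 first_answer="cream", final_answer="cheese")
print(trial.changed_answer(), trial.final_correct())  # True True

Aggregating changed_answer and final_correct over many such trials, split by advice_source, corresponds to the two outcome comparisons reported in the abstract (how often responses were revised, and how often the final answer was correct).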


Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/7af2/9411628/0822ddd29925/41598_2022_18638_Fig1_HTML.jpg
