
Finding deceivers in social context with large language models and how to find them: the case of the Mafia game.

Author Information

Yoo Byunghwa, Kim Kyung-Joong

Affiliations

AI Graduate School, Gwangju Institute of Science and Technology, Gwangju, 61005, Republic of Korea.

School of Integrated Technology, Gwangju Institute of Science and Technology, Gwangju, 61005, Republic of Korea.

Publication Information

Sci Rep. 2024 Dec 28;14(1):30946. doi: 10.1038/s41598-024-81997-5.

Abstract

Lies are ubiquitous and often occur in social interactions. However, it is hard to collect data on socially conducted deception, since people are unlikely to self-report intentionally deceptive behavior, especially malicious deception. Social deduction games, a genre of social game in which deception is a core gameplay mechanic, offer a good alternative for studying social deception. We therefore leveraged the strong performance of large language models (LLMs) on complex scenarios that require reasoning, together with prompt engineering, to detect deceivers in the game of Mafia given only partial information. This approach achieved better accuracy on human data than previous BERT-based methods and even surpassed human accuracy. Furthermore, we conducted extensive experiments and analyses to uncover the strategies behind the LLMs' reasoning process, so that humans can understand the gist of the LLMs' strategy.
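The paper's actual prompts and evaluation pipeline are not reproduced on this page. As a minimal illustrative sketch only, deceiver detection of this kind can be framed as prompting an LLM with a partial game log and parsing the returned suspect; the function names, prompt wording, and answer format below are assumptions, not the authors' method:

```python
# Illustrative sketch, not the paper's implementation: build a deceiver-detection
# prompt from a partial Mafia transcript and parse the model's named suspect.
# build_prompt / parse_suspect are hypothetical helpers.
import re

def build_prompt(players, transcript):
    """Assemble a prompt asking the model to name the likely Mafia player.

    players: list of player names; transcript: list of (speaker, utterance)
    pairs representing the partial discussion observed so far.
    """
    log = "\n".join(f"{speaker}: {utterance}" for speaker, utterance in transcript)
    return (
        "You are observing a game of Mafia. Based only on the partial "
        "discussion below, reason step by step and name the single player "
        f"most likely to be Mafia (one of: {', '.join(players)}).\n\n"
        f"{log}\n\n"
        "End your answer with a line of the form: SUSPECT: <name>"
    )

def parse_suspect(response, players):
    """Extract the suspect named by the model; None if no valid name is found."""
    match = re.search(r"SUSPECT:\s*(\w+)", response)
    if match and match.group(1) in players:
        return match.group(1)
    return None
```

In use, `build_prompt(...)` would be sent to any chat-completion API, and `parse_suspect` applied to the reply; constraining the answer to a fixed `SUSPECT: <name>` line keeps the free-form reasoning text separable from the final classification.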


https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f76f/11680777/a00462d11d8d/41598_2024_81997_Fig1_HTML.jpg
