Riccardi Nicholas, Yang Xuan, Desai Rutvik H
Department of Communication Sciences and Disorders, University of South Carolina, Columbia, 29208, USA.
Department of Psychology, University of South Carolina, Columbia, 29208, USA.
Sci Rep. 2024 Sep 16;14(1):21593. doi: 10.1038/s41598-024-72528-3.
Large language models (LLMs) have recently shown remarkable abilities, including passing advanced professional exams and demanding benchmark tests. This performance has led many to suggest that they are close to achieving humanlike or "true" understanding of language, and even artificial general intelligence (AGI). Here, we provide a new open-source benchmark, the Two Word Test (TWT), that assesses the semantic abilities of LLMs using two-word phrases in a task that humans can perform relatively easily without advanced training. Combining multiple words into a single concept is a fundamental linguistic and conceptual operation routinely performed by people. The test requires meaningfulness judgments of 1768 noun-noun combinations that have been rated by human raters as meaningful (e.g., baby boy) or as having low meaningfulness (e.g., goat sky). This novel test differs from existing benchmarks that rely on logical reasoning, inference, puzzle-solving, or domain expertise. We provide two versions of the task: one probing meaningfulness ratings on a 0-4 scale and one requiring binary judgments. With both versions, we conducted a series of experiments using the TWT on GPT-4, GPT-3.5, Claude-3-Opus, and Gemini-1.0-Pro-001. Results demonstrated that, compared to humans, all models performed relatively poorly at rating the meaningfulness of these phrases. GPT-3.5-turbo, Gemini-1.0-Pro-001, and GPT-4-turbo were also unable to make binary discriminations between sensible and nonsense phrases, consistently judging nonsensical phrases as making sense. Claude-3-Opus showed a substantial improvement in binary discrimination of combinatorial phrases but was still significantly worse than human performance. The TWT can be used to understand and assess the limitations of current LLMs, and potentially to improve them. The test also reminds us that caution is warranted in attributing "true" or human-level understanding to LLMs based only on tests that are challenging for humans.
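A minimal sketch of how a TWT-style query might be issued to an LLM for the 0-4 rating version of the task. The prompt wording, model name, and the example use of the OpenAI Python client are illustrative assumptions, not the authors' exact protocol.

```python
# Illustrative sketch only: prompt wording and model choice are assumptions,
# not the protocol reported in the paper.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

def rate_phrase(phrase: str, model: str = "gpt-4-turbo") -> str:
    """Ask the model to rate a two-word phrase's meaningfulness on a 0-4 scale."""
    response = client.chat.completions.create(
        model=model,
        messages=[
            {
                "role": "user",
                "content": (
                    "Rate how meaningful the following two-word phrase is on a "
                    "scale from 0 (makes no sense) to 4 (makes complete sense). "
                    f"Respond with a single number.\nPhrase: {phrase}"
                ),
            }
        ],
        temperature=0,
    )
    return response.choices[0].message.content.strip()

# One meaningful and one low-meaningfulness pair from the abstract
for phrase in ["baby boy", "goat sky"]:
    print(phrase, "->", rate_phrase(phrase))
```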