

The Two Word Test as a semantic benchmark for large language models.

Author Information

Nicholas Riccardi, Xuan Yang, Rutvik H. Desai

Affiliations

Department of Communication Sciences and Disorders, University of South Carolina, Columbia, 29208, USA.

Department of Psychology, University of South Carolina, Columbia, 29208, USA.

Publication Information

Sci Rep. 2024 Sep 16;14(1):21593. doi: 10.1038/s41598-024-72528-3.

DOI: 10.1038/s41598-024-72528-3
PMID: 39284863
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11405709/
Abstract

Large language models (LLMs) have shown remarkable abilities recently, including passing advanced professional exams and demanding benchmark tests. This performance has led many to suggest that they are close to achieving humanlike or "true" understanding of language, and even artificial general intelligence (AGI). Here, we provide a new open-source benchmark, the Two Word Test (TWT), that can assess semantic abilities of LLMs using two-word phrases in a task that can be performed relatively easily by humans without advanced training. Combining multiple words into a single concept is a fundamental linguistic and conceptual operation routinely performed by people. The test requires meaningfulness judgments of 1768 noun-noun combinations that have been rated as meaningful (e.g., baby boy) or as having low meaningfulness (e.g., goat sky) by human raters. This novel test differs from existing benchmarks that rely on logical reasoning, inference, puzzle-solving, or domain expertise. We provide versions of the task that probe meaningfulness ratings on a 0-4 scale as well as binary judgments. With both versions, we conducted a series of experiments using the TWT on GPT-4, GPT-3.5, Claude-3-Opus, and Gemini-1.0-Pro-001. Results demonstrated that, compared to humans, all models performed relatively poorly at rating meaningfulness of these phrases. GPT-3.5-turbo, Gemini-1.0-Pro-001 and GPT-4-turbo were also unable to make binary discriminations between sensible and nonsense phrases, with these models consistently judging nonsensical phrases as making sense. Claude-3-Opus made a substantial improvement in binary discrimination of combinatorial phrases but was still significantly worse than human performance. The TWT can be used to understand and assess the limitations of current LLMs, and potentially improve them. The test also reminds us that caution is warranted in attributing "true" or human-level understanding to LLMs based only on tests that are challenging for humans.
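The binary version of the benchmark described above can be scored with a simple harness. The sketch below is illustrative only, not the authors' released code: the mini-sample `SAMPLE`, the scorer `score_binary_twt`, and the toy judge `always_sense_judge` are all assumptions. It shows per-class accuracy, which separates the failure mode the abstract reports, where some models judge nearly every nonsense phrase as making sense.

```python
from typing import Callable, Dict, List, Tuple

# Hypothetical mini-sample in the spirit of the 1768-item test set;
# the real items and labels come from the published TWT materials.
SAMPLE: List[Tuple[str, str]] = [
    ("baby boy", "meaningful"),
    ("time zone", "meaningful"),
    ("goat sky", "nonsense"),
    ("lamp justice", "nonsense"),
]

def score_binary_twt(judge: Callable[[str], str],
                     items: List[Tuple[str, str]]) -> Dict[str, float]:
    """Return per-class accuracy of a binary meaningfulness judge."""
    hits = {"meaningful": 0, "nonsense": 0}
    totals = {"meaningful": 0, "nonsense": 0}
    for phrase, label in items:
        totals[label] += 1
        if judge(phrase) == label:
            hits[label] += 1
    return {label: hits[label] / totals[label] for label in totals}

def always_sense_judge(phrase: str) -> str:
    """Toy stand-in for a model that judges every phrase as sensible,
    the bias the paper observed in several LLMs."""
    return "meaningful"

scores = score_binary_twt(always_sense_judge, SAMPLE)
print(scores)  # → {'meaningful': 1.0, 'nonsense': 0.0}
```

Reporting the two classes separately matters here: a judge biased toward "meaningful" still scores 50% overall accuracy on a balanced set, while its 0% accuracy on the nonsense class exposes the bias directly.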


Figures (PMC11405709):
Fig. 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/39c0/11405709/ec1a0d9599e8/41598_2024_72528_Fig1_HTML.jpg
Fig. 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/39c0/11405709/e2a17386b614/41598_2024_72528_Fig2_HTML.jpg
Fig. 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/39c0/11405709/d2cb28fb4b0f/41598_2024_72528_Fig3_HTML.jpg
Fig. 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/39c0/11405709/52ed22e6f729/41598_2024_72528_Fig4_HTML.jpg
Fig. 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/39c0/11405709/53f781faf2e5/41598_2024_72528_Fig5_HTML.jpg

