Similar Articles

1. Divergences between Language Models and Human Brains. Adv Neural Inf Process Syst. 2024;37:137999-138031.
2. Divergences between Language Models and Human Brains. ArXiv. 2025 Jan 13:arXiv:2311.09308v3.
3. Language in Brains, Minds, and Machines. Annu Rev Neurosci. 2024 Aug;47(1):277-301. doi: 10.1146/annurev-neuro-120623-101142. Epub 2024 Jul 1.
4. Health Care Language Models and Their Fine-Tuning for Information Extraction: Scoping Review. JMIR Med Inform. 2024 Oct 21;12:e60164. doi: 10.2196/60164.
5. Relation Extraction from Clinical Narratives Using Pre-trained Language Models. AMIA Annu Symp Proc. 2020 Mar 4;2019:1236-1245. eCollection 2019.
6. Dissociable Neural Mechanisms for Human Inference Processing Predicted by Static and Contextual Language Models. Neurobiol Lang (Camb). 2024 Apr 1;5(1):248-263. doi: 10.1162/nol_a_00090. eCollection 2024.
7. A 10-hour within-participant magnetoencephalography narrative dataset to test models of language comprehension. Sci Data. 2022 Jun 8;9(1):278. doi: 10.1038/s41597-022-01382-7.
8. Brains and algorithms partially converge in natural language processing. Commun Biol. 2022 Feb 16;5(1):134. doi: 10.1038/s42003-022-03036-1.
9. Neural correlates of word representation vectors in natural language processing models: Evidence from representational similarity analysis of event-related brain potentials. Psychophysiology. 2022 Mar;59(3):e13976. doi: 10.1111/psyp.13976. Epub 2021 Nov 24.
10. A novel method of language modeling for automatic captioning in TC video teleconferencing. IEEE Trans Inf Technol Biomed. 2007 May;11(3):332-7. doi: 10.1109/titb.2006.885549.

Cited By

1. Low-Rank Tensor Encoding Models Decompose Natural Speech Comprehension Processes. bioRxiv. 2025 Jun 3:2025.06.02.657514. doi: 10.1101/2025.06.02.657514.

References

1. Scaling laws for language encoding models in fMRI. Adv Neural Inf Process Syst. 2023;36:21895-21907.
2. Language models, like humans, show content effects on reasoning tasks. PNAS Nexus. 2024 Jul 16;3(7):pgae233. doi: 10.1093/pnasnexus/pgae233. eCollection 2024 Jul.
3. Predictive Coding or Just Feature Discovery? An Alternative Account of Why Language Models Fit Brain Data. Neurobiol Lang (Camb). 2024 Apr 1;5(1):64-79. doi: 10.1162/nol_a_00087. eCollection 2024.
4. Dissociating language and thought in large language models. Trends Cogn Sci. 2024 Jun;28(6):517-540. doi: 10.1016/j.tics.2024.01.011. Epub 2024 Mar 19.
5. The cortical representation of language timescales is shared between reading and listening. Commun Biol. 2024 Mar 7;7(1):284. doi: 10.1038/s42003-024-05909-z.
6. A natural language fMRI dataset for voxelwise encoding models. Sci Data. 2023 Aug 23;10(1):555. doi: 10.1038/s41597-023-02437-z.
7. Evidence of a predictive coding hierarchy in the human brain listening to speech. Nat Hum Behav. 2023 Mar;7(3):430-441. doi: 10.1038/s41562-022-01516-2. Epub 2023 Mar 2.
8. Combining computational controls with natural text reveals aspects of meaning composition. Nat Comput Sci. 2022 Nov;2(11):745-757. doi: 10.1038/s43588-022-00354-6. Epub 2022 Nov 28.
9. Deep language algorithms predict semantic comprehension from brain activity. Sci Rep. 2022 Sep 29;12(1):16327. doi: 10.1038/s41598-022-20460-9.
10. At the Neural Intersection Between Language and Emotion. Affect Sci. 2021 Mar 20;2(2):207-220. doi: 10.1007/s42761-021-00032-2. eCollection 2021 Jun.

Divergences between Language Models and Human Brains.

Authors

Zhou Yuchen, Liu Emmy, Neubig Graham, Tarr Michael J, Wehbe Leila

Affiliations

Carnegie Mellon University.

Publication

Adv Neural Inf Process Syst. 2024;37:137999-138031.

PMID:40433349
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12108097/
Abstract

Do machines and humans process language in similar ways? Recent research has hinted at the affirmative, showing that human neural activity can be effectively predicted using the internal representations of language models (LMs). Although such results are thought to reflect shared computational principles between LMs and human brains, there are also clear differences in how LMs and humans represent and use language. In this work, we systematically explore the divergences between human and machine language processing by examining the differences between LM representations and human brain responses to language as measured by Magnetoencephalography (MEG) across two datasets in which subjects read and listened to narrative stories. Using an LLM-based data-driven approach, we identify two domains that LMs do not capture well: social/emotional intelligence and physical commonsense. We validate these findings with human behavioral experiments and hypothesize that the gap is due to insufficient representations of social/emotional and physical knowledge in LMs. Our results show that fine-tuning LMs on these domains can improve their alignment with human brain responses.
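The abstract's core method is an encoding-model analysis: regress brain responses (here, MEG) on LM hidden-state features, then score how well held-out activity is predicted. A minimal numpy sketch with synthetic data; the ridge penalty, array sizes, and all names below are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def fit_encoding_model(X, Y, alpha=1.0):
    """Ridge regression from LM features X (n_samples, n_features)
    to brain responses Y (n_samples, n_channels)."""
    n_features = X.shape[1]
    # Closed-form ridge solution: (X'X + alpha*I)^-1 X'Y
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_features), X.T @ Y)

def channelwise_correlation(Y_true, Y_pred):
    """Pearson r between observed and predicted response, per channel."""
    Yt = Y_true - Y_true.mean(axis=0)
    Yp = Y_pred - Y_pred.mean(axis=0)
    return (Yt * Yp).sum(axis=0) / (
        np.linalg.norm(Yt, axis=0) * np.linalg.norm(Yp, axis=0))

# Synthetic demo: "brain responses" that are a noisy linear
# function of "LM embeddings" (dimensions are arbitrary).
rng = np.random.default_rng(0)
X_train = rng.standard_normal((500, 32))   # one feature row per word
W_true = rng.standard_normal((32, 10))     # 10 stand-in MEG channels
Y_train = X_train @ W_true + 0.1 * rng.standard_normal((500, 10))
X_test = rng.standard_normal((100, 32))
Y_test = X_test @ W_true + 0.1 * rng.standard_normal((100, 10))

W = fit_encoding_model(X_train, Y_train, alpha=10.0)
r = channelwise_correlation(Y_test, X_test @ W)
print(f"mean held-out correlation across channels: {r.mean():.3f}")
```

In the paper's framing, divergences are the places where this predictive fit is poor; the fine-tuning result then amounts to showing that updating the feature extractor raises these held-out correlations.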
