
Latent Relations at Steady-state with Associative Nets.

Affiliations

School of Psychological Sciences, The University of Melbourne.

Department of Cognitive Sciences, Hanyang University.

Publication Information

Cogn Sci. 2024 Sep;48(9):e13494. doi: 10.1111/cogs.13494.

DOI: 10.1111/cogs.13494
PMID: 39283248
Abstract

Models of word meaning that exploit patterns of word usage across large text corpora to capture semantic relations, like the topic model and word2vec, condense word-by-context co-occurrence statistics to induce representations that organize words along semantically relevant dimensions (e.g., synonymy, antonymy, hyponymy, etc.). However, their reliance on latent representations leaves them vulnerable to interference, makes them slow learners, and commits to a dual-systems account of episodic and semantic memory. We show how it is possible to construct the meaning of words online during retrieval to avoid these limitations. We implement a spreading activation account of word meaning in an associative net, a one-layer highly recurrent network of associations, called a Dynamic-Eigen-Net, that we developed to address the limitations of earlier variants of associative nets when scaling up to deal with unstructured input domains like natural language text. We show that spreading activation using a one-hot coded Dynamic-Eigen-Net outperforms the topic model and reaches similar levels of performance as word2vec when predicting human free associations and word similarity ratings. Latent Semantic Analysis vectors reached similar levels of performance when constructed by applying dimensionality reduction to the Shifted Positive Pointwise Mutual Information but showed poorer predictability for free associations when using an entropy-based normalization. An analysis of the rate at which the Dynamic-Eigen-Net reaches asymptotic performance shows that it learns faster than word2vec. We argue in favor of the Dynamic-Eigen-Net as a fast learner, with a single-store, that is not subject to catastrophic interference. We present it as an alternative to instance models when delegating the induction of latent relationships to process assumptions instead of assumptions about representation.
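The retrieval mechanism the abstract describes can be illustrated with a minimal sketch: a one-layer, highly recurrent matrix of word-word associations, queried by spreading activation from a one-hot coded cue. This is a generic associative-net sketch under assumed update rules, not the paper's Dynamic-Eigen-Net (whose specific mechanisms are not reproduced here); the toy vocabulary, association matrix, and parameter values are hypothetical.

```python
import numpy as np

# Toy vocabulary and a symmetric word-word association (co-occurrence)
# matrix W. All values here are hypothetical, for illustration only.
vocab = ["doctor", "nurse", "hospital", "bread", "butter"]
idx = {w: i for i, w in enumerate(vocab)}

W = np.array([
    [0, 4, 3, 0, 0],
    [4, 0, 2, 0, 0],
    [3, 2, 0, 0, 0],
    [0, 0, 0, 0, 5],
    [0, 0, 0, 5, 0],
], dtype=float)

def spread(cue, W, steps=2, decay=0.5):
    """Spread activation from a one-hot coded cue through the net."""
    a = np.zeros(len(vocab))
    a[idx[cue]] = 1.0               # one-hot coding of the cue word
    total = a.copy()
    for _ in range(steps):
        a = decay * (W @ a)         # one step of spreading activation
        a /= a.sum() or 1.0         # normalize to keep activations bounded
        total += a                  # accumulate activation over steps
    return total

act = spread("doctor", W)
# Associates of the cue accumulate activation; unrelated words stay at zero.
print(vocab[int(np.argsort(-act)[1])])  # strongest associate after the cue
```

Because the meaning of the cue is constructed online at retrieval time from the raw association matrix, there is no latent representation to interfere with when new associations are added.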

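The Latent Semantic Analysis comparison in the abstract (dimensionality reduction applied to Shifted Positive Pointwise Mutual Information) can be sketched as SPPMI followed by SVD. This is an assumed, generic pipeline rather than the paper's exact procedure; the toy count matrix C and the shift parameter k are hypothetical.

```python
import numpy as np

# Toy word-by-context co-occurrence counts (hypothetical values).
C = np.array([
    [10, 0, 2, 0],
    [ 8, 1, 0, 0],
    [ 0, 6, 0, 4],
    [ 0, 3, 1, 5],
], dtype=float)

def sppmi(C, k=1.0):
    """Shifted Positive PMI: max(PMI - log k, 0), elementwise."""
    total = C.sum()
    p_w = C.sum(axis=1, keepdims=True) / total   # word marginals
    p_c = C.sum(axis=0, keepdims=True) / total   # context marginals
    with np.errstate(divide="ignore", invalid="ignore"):
        pmi = np.log((C / total) / (p_w * p_c))
    pmi[~np.isfinite(pmi)] = 0.0                 # zero counts contribute 0
    return np.maximum(pmi - np.log(k), 0.0)

M = sppmi(C, k=1.0)
# SVD-based dimensionality reduction yields LSA-style word vectors.
U, S, Vt = np.linalg.svd(M, full_matrices=False)
d = 2
vectors = U[:, :d] * S[:d]   # d-dimensional embeddings, one row per word
```

In contrast to the associative-net approach above, these vectors are a condensed latent representation: adding new co-occurrence data requires recomputing the factorization, which is one source of the slow, interference-prone learning the abstract argues against.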

Similar Articles

1. Latent Relations at Steady-state with Associative Nets.
   Cogn Sci. 2024 Sep;48(9):e13494. doi: 10.1111/cogs.13494.
2. Deep Artificial Neural Networks Reveal a Distributed Cortical Network Encoding Propositional Sentence-Level Meaning.
   J Neurosci. 2021 May 5;41(18):4100-4119. doi: 10.1523/JNEUROSCI.1152-20.2021. Epub 2021 Mar 22.
3. Semantic Memory Search and Retrieval in a Novel Cooperative Word Game: A Comparison of Associative and Distributional Semantic Models.
   Cogn Sci. 2021 Oct;45(10):e13053. doi: 10.1111/cogs.13053.
4. The interpretation of dream meaning: Resolving ambiguity using Latent Semantic Analysis in a small corpus of text.
   Conscious Cogn. 2017 Nov;56:178-187. doi: 10.1016/j.concog.2017.09.004. Epub 2017 Sep 21.
5. Principal semantic components of language and the measurement of meaning.
   PLoS One. 2010 Jun 11;5(6):e10921. doi: 10.1371/journal.pone.0010921.
6. Unraveling lexical semantics in the brain: Comparing internal, external, and hybrid language models.
   Hum Brain Mapp. 2024 Jan;45(1):e26546. doi: 10.1002/hbm.26546. Epub 2023 Nov 28.
7. Overlap in meaning is a stronger predictor of semantic activation in GPT-3 than in humans.
   Sci Rep. 2023 Mar 28;13(1):5035. doi: 10.1038/s41598-023-32248-6.
8. Traces of Meaning Itself: Encoding Distributional Word Vectors in Brain Activity.
   Neurobiol Lang (Camb). 2020 Mar 1;1(1):54-76. doi: 10.1162/nol_a_00003. eCollection 2020.
9. Latent structure in measures of associative, semantic, and thematic knowledge.
   Psychon Bull Rev. 2008 Jun;15(3):598-603. doi: 10.3758/pbr.15.3.598.
10. Unsupervised low-dimensional vector representations for words, phrases and text that are transparent, scalable, and produce similarity metrics that are not redundant with neural embeddings.
   J Biomed Inform. 2019 Feb;90:103096. doi: 10.1016/j.jbi.2019.103096. Epub 2019 Jan 14.