

Continual familiarity decoding from recurrent connections in spiking networks.

Authors

Zemliak Viktoria, Pipa Gordon, Nieters Pascal

Affiliations

Institute of Cognitive Science, Osnabrück University, Osnabrück, Germany.

Frankfurt Institute of Advanced Studies, Frankfurt, Germany.

Publication

PLoS Comput Biol. 2025 Aug 1;21(8):e1013304. doi: 10.1371/journal.pcbi.1013304. eCollection 2025 Aug.

DOI: 10.1371/journal.pcbi.1013304
PMID: 40749040
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC12334059/
Abstract

Familiarity memory enables recognition of previously encountered inputs as familiar without recalling detailed stimuli information, which supports adaptive behavior across various timescales. We present a spiking neural network model with lateral connectivity shaped by unsupervised spike-timing-dependent plasticity (STDP) that encodes familiarity via local plasticity events. We show that familiarity can be decoded from network activity using both frequency (spike count) and temporal (spike synchrony) characteristics of spike trains. Temporal coding demonstrates enhanced performance under sparse input conditions, consistent with the principles of sparse coding observed in the brain. We also show how connectivity structure supports each decoding strategy, revealing different plasticity regimes. Our approach outperforms LSTM in temporal generalizability on the continual familiarity detection task, with input stimuli being naturally encoded in the recurrent connectivity without a separate training stage.
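The abstract's core learning mechanism is unsupervised pair-based STDP shaping the lateral weights: repeated co-activation of input-driven neurons strengthens the synapses between them, so familiarity becomes implicit in the connectivity without a separate training stage. A minimal sketch of that rule is below — this is a generic exponential STDP window with illustrative parameter values, not the paper's exact implementation:

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.01, a_minus=0.012, tau_plus=20.0, tau_minus=20.0):
    """Pair-based STDP weight change for one pre/post spike pair.

    dt_ms = t_post - t_pre. Pre-before-post (dt_ms >= 0) potentiates the
    synapse; post-before-pre depresses it, each with exponential decay.
    Parameter values are illustrative assumptions, not the paper's.
    """
    if dt_ms >= 0:
        return a_plus * np.exp(-dt_ms / tau_plus)
    return -a_minus * np.exp(dt_ms / tau_minus)

# Repeated presentations of a stimulus deliver causally ordered spike
# pairs, so the lateral weight between co-activated neurons grows --
# this local plasticity event is what later signals familiarity.
w = 0.5
for t_pre, t_post in [(10.0, 15.0), (30.0, 33.0), (52.0, 58.0)]:
    w = float(np.clip(w + stdp_dw(t_post - t_pre), 0.0, 1.0))
```

The key design point mirrored from the abstract is locality: each update depends only on the timing of the two spikes at that synapse, so encoding happens online as stimuli arrive.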

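The two decoding strategies the abstract contrasts — a frequency code (spike count) and a temporal code (spike synchrony) — can be sketched as simple features over a binned spike raster. This is a schematic, not the authors' decoder; in particular the coincidence-fraction synchrony measure below is an assumption for illustration:

```python
import numpy as np

def spike_count(raster):
    """Frequency-code feature: total spikes in the response window."""
    return int(raster.sum())

def synchrony(raster):
    """Temporal-code feature (assumed measure): fraction of active time
    bins in which two or more neurons fire in the same bin."""
    per_bin = raster.sum(axis=0)
    active = per_bin > 0
    if not active.any():
        return 0.0
    return float((per_bin >= 2).sum() / active.sum())

# raster: rows = neurons, columns = time bins (0/1 spikes).
# A familiar stimulus recruits strengthened lateral synapses, which
# tends to produce more and more-coincident spikes than a novel one.
familiar = np.array([[1, 0, 1, 1],
                     [1, 0, 1, 0],
                     [0, 0, 1, 0]])
novel = np.array([[1, 0, 0, 0],
                  [0, 1, 0, 0],
                  [0, 0, 0, 1]])
```

Thresholding either feature yields a familiar/novel decision; the abstract's finding is that the synchrony-style readout degrades more gracefully when input is sparse.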

Figures

https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1682/12334059/65bfade3e8f1/pcbi.1013304.g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1682/12334059/f96fc4cf2b0b/pcbi.1013304.g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1682/12334059/86a761e03e21/pcbi.1013304.g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1682/12334059/464b4734d3d5/pcbi.1013304.g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1682/12334059/35b4d459088e/pcbi.1013304.g005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1682/12334059/e2771c3a8366/pcbi.1013304.g006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/1682/12334059/c2df522c4bad/pcbi.1013304.g007.jpg

Similar Articles

1. Continual familiarity decoding from recurrent connections in spiking networks.
PLoS Comput Biol. 2025 Aug 1;21(8):e1013304. doi: 10.1371/journal.pcbi.1013304. eCollection 2025 Aug.
2. Parvalbumin neurons and cortical coding of dynamic stimuli: a network model.
J Neurophysiol. 2025 Jul 1;134(1):53-66. doi: 10.1152/jn.00283.2024. Epub 2025 May 13.
3. Minute-Scale Oscillations in Sparse Neural Networks.
Hippocampus. 2025 Jul;35(4):e70021. doi: 10.1002/hipo.70021.
4. Short-Term Memory Impairment
5. STSF: Spiking Time Sparse Feedback Learning for Spiking Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2025 Jun;36(6):11479-11492. doi: 10.1109/TNNLS.2025.3527700.
6. Real-Time Large-Scale Neural Connectivity Inference on Spiking Neuromorphic System.
IEEE Trans Neural Syst Rehabil Eng. 2025;33:2781-2792. doi: 10.1109/TNSRE.2025.3583057.
7. Selective inhibition in CA3: A mechanism for stable pattern completion through heterosynaptic plasticity.
PLoS Comput Biol. 2025 Jul 7;21(7):e1013267. doi: 10.1371/journal.pcbi.1013267. eCollection 2025 Jul.
8. Manipulation of neuronal activity by an artificial spiking neural network implemented on a closed-loop brain-computer interface in non-human primates.
J Neural Eng. 2025 Jul 21;22(4):046021. doi: 10.1088/1741-2552/adec1c.
9. Neurons throughout the brain embed robust signatures of their anatomical location into spike trains.
Elife. 2025 Jun 27;13:RP101506. doi: 10.7554/eLife.101506.
10. Learning predictive signals within a local recurrent circuit.
Proc Natl Acad Sci U S A. 2025 Jul 8;122(27):e2414674122. doi: 10.1073/pnas.2414674122. Epub 2025 Jul 1.
