

A Fast Spatial Pool Learning Algorithm of Hierarchical Temporal Memory Based on Minicolumn's Self-Nomination

Authors

Li Lei, Zou Tingting, Cai Tao, Niu Dejiao, Zhu Yuquan

Affiliation

Department of Computer Science and Communication Engineering, Jiangsu University, Zhenjiang, China.

Publication

Comput Intell Neurosci. 2021 Mar 17;2021:6680833. doi: 10.1155/2021/6680833. eCollection 2021.

DOI: 10.1155/2021/6680833
PMID: 33790959
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC7994094/
Abstract

As a new type of artificial neural network model, HTM has become the focus of current research and application. The sparse distributed representation is the basis of the HTM model, but the existing spatial pool learning algorithms have high training time overhead and may cause the spatial pool to become unstable. To overcome these disadvantages, we propose a fast spatial pool learning algorithm of HTM based on minicolumn's nomination, where the minicolumns are selected according to the load-carrying capacity and the synapses are adjusted using compressed encoding. We have implemented the prototype of the algorithm and carried out experiments on three datasets. It is verified that the training time overhead of the proposed algorithm is almost unaffected by the encoding length, and the spatial pool becomes stable after fewer iterations of training. Moreover, the training of the new input does not affect the already trained results.
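To make the abstract's terminology concrete, here is a minimal sketch of one spatial-pooler training step in the style of the standard HTM spatial pooler (not the paper's self-nomination variant): each minicolumn scores its overlap with the sparse input encoding, the top-k minicolumns activate, and their synapse permanences are adjusted Hebbian-style. All sizes, thresholds, and increments below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)

N_INPUT = 256        # input encoding length (bits) -- illustrative
N_COLUMNS = 128      # number of minicolumns -- illustrative
SPARSITY = 0.02      # fraction of minicolumns active per step
CONN_THRESH = 0.5    # permanence threshold for a connected synapse
PERM_INC, PERM_DEC = 0.05, 0.02

# Each minicolumn holds a permanence value for a potential synapse
# to every input bit.
permanences = rng.uniform(0.3, 0.7, size=(N_COLUMNS, N_INPUT))

def spatial_pool_step(x, permanences):
    """One SP step: return active-column indices, update permanences in place."""
    connected = permanences >= CONN_THRESH   # boolean connected-synapse matrix
    overlaps = connected @ x                 # overlap score per minicolumn
    k = max(1, int(SPARSITY * N_COLUMNS))
    active = np.argsort(overlaps)[-k:]       # k-winners-take-all activation
    # Hebbian learning: active minicolumns strengthen synapses on active
    # input bits and weaken synapses on inactive bits.
    delta = np.where(x.astype(bool), PERM_INC, -PERM_DEC)
    permanences[active] = np.clip(permanences[active] + delta, 0.0, 1.0)
    return active

x = (rng.random(N_INPUT) < 0.1).astype(np.int64)   # a sparse input vector
active = spatial_pool_step(x, permanences)
print(len(active))  # number of active minicolumns this step
```

The paper's contribution replaces this exhaustive overlap-and-select loop with minicolumn self-nomination based on load-carrying capacity, plus compressed-encoding synapse updates, which is what decouples training cost from the encoding length.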


Figures (PMC full text):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/28cc/7994094/89f0d5e19200/CIN2021-6680833.001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/28cc/7994094/a6edd50d3e6a/CIN2021-6680833.002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/28cc/7994094/12c8775713eb/CIN2021-6680833.003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/28cc/7994094/f9381fe76c3f/CIN2021-6680833.004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/28cc/7994094/53343871b23e/CIN2021-6680833.005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/28cc/7994094/0414d8b549ee/CIN2021-6680833.006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/28cc/7994094/8a672a31dcf6/CIN2021-6680833.007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/28cc/7994094/2b67b6cebc54/CIN2021-6680833.008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/28cc/7994094/0175fc2d92da/CIN2021-6680833.009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/28cc/7994094/3b8373e16cbb/CIN2021-6680833.010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/28cc/7994094/9f4371ed3156/CIN2021-6680833.011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/28cc/7994094/3afce44db9b8/CIN2021-6680833.012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/28cc/7994094/903391d05aee/CIN2021-6680833.013.jpg

Similar Articles

1. A Fast Spatial Pool Learning Algorithm of Hierarchical Temporal Memory Based on Minicolumn's Self-Nomination.
   Comput Intell Neurosci. 2021 Mar 17;2021:6680833. doi: 10.1155/2021/6680833. eCollection 2021.
2. A New Hierarchical Temporal Memory Algorithm Based on Activation Intensity.
   Comput Intell Neurosci. 2022 Jan 24;2022:6072316. doi: 10.1155/2022/6072316. eCollection 2022.
3. The HTM Spatial Pooler-A Neocortical Algorithm for Online Sparse Distributed Coding.
   Front Comput Neurosci. 2017 Nov 29;11:111. doi: 10.3389/fncom.2017.00111. eCollection 2017.
4. Anomalous Behavior Detection Framework Using HTM-Based Semantic Folding Technique.
   Comput Math Methods Med. 2021 Mar 16;2021:5585238. doi: 10.1155/2021/5585238. eCollection 2021.
5. HTM Spatial Pooler With Memristor Crossbar Circuits for Sparse Biometric Recognition.
   IEEE Trans Biomed Circuits Syst. 2017 Jun;11(3):640-651. doi: 10.1109/TBCAS.2016.2641983. Epub 2017 Mar 1.
6. Incremental Learning by Message Passing in Hierarchical Temporal Memory.
   Neural Comput. 2014 Aug;26(8):1763-809. doi: 10.1162/NECO_a_00617. Epub 2014 May 30.
7. Who Is the Winner? Memristive-CMOS Hybrid Modules: CNN-LSTM Versus HTM.
   IEEE Trans Biomed Circuits Syst. 2020 Apr;14(2):164-172. doi: 10.1109/TBCAS.2019.2956435. Epub 2019 Nov 28.
8. Hierarchical Temporal Memory Based on Spin-Neurons and Resistive Memory for Energy-Efficient Brain-Inspired Computing.
   IEEE Trans Neural Netw Learn Syst. 2016 Sep;27(9):1907-19. doi: 10.1109/TNNLS.2015.2462731. Epub 2015 Aug 14.
9. A Dendritic Neuron Model with Adaptive Synapses Trained by Differential Evolution Algorithm.
   Comput Intell Neurosci. 2020 Jan 17;2020:2710561. doi: 10.1155/2020/2710561. eCollection 2020.
10. Continuous Online Sequence Learning with an Unsupervised Neural Network Model.
    Neural Comput. 2016 Nov;28(11):2474-2504. doi: 10.1162/NECO_a_00893. Epub 2016 Sep 14.

References Cited in This Article

1. A Framework for Intelligence and Cortical Function Based on Grid Cells in the Neocortex.
   Front Neural Circuits. 2019 Jan 11;12:121. doi: 10.3389/fncir.2018.00121. eCollection 2018.
2. Dendritic Neuron Model With Effective Learning Algorithms for Classification, Approximation, and Prediction.
   IEEE Trans Neural Netw Learn Syst. 2019 Feb;30(2):601-614. doi: 10.1109/TNNLS.2018.2846646. Epub 2018 Jul 10.
3. Why Neurons Have Thousands of Synapses, a Theory of Sequence Memory in Neocortex.
   Front Neural Circuits. 2016 Mar 30;10:23. doi: 10.3389/fncir.2016.00023. eCollection 2016.