

Cellular Automata Can Reduce Memory Requirements of Collective-State Computing.

Authors

Kleyko Denis, Frady Edward Paxon, Sommer Friedrich T

Publication

IEEE Trans Neural Netw Learn Syst. 2022 Jun;33(6):2701-2713. doi: 10.1109/TNNLS.2021.3119543. Epub 2022 Jun 1.

DOI: 10.1109/TNNLS.2021.3119543
PMID: 34699370
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC9215349/
Abstract

Various nonclassical approaches of distributed information processing, such as neural networks, reservoir computing (RC), vector symbolic architectures (VSAs), and others, employ the principle of collective-state computing. In this type of computing, the variables relevant in computation are superimposed into a single high-dimensional state vector, the collective state. The variable encoding uses a fixed set of random patterns, which has to be stored and kept available during the computation. In this article, we show that an elementary cellular automaton with rule 90 (CA90) enables the space-time tradeoff for collective-state computing models that use random dense binary representations, i.e., memory requirements can be traded off with computation running CA90. We investigate the randomization behavior of CA90, in particular, the relation between the length of the randomization period and the size of the grid, and how CA90 preserves similarity in the presence of the initialization noise. Based on these analyses, we discuss how to optimize a collective-state computing model, in which CA90 expands representations on the fly from short seed patterns, rather than storing the full set of random patterns. The CA90 expansion is applied and tested in concrete scenarios using RC and VSAs. Our experimental results show that collective-state computing with CA90 expansion performs comparably to traditional collective-state models, in which random patterns are generated initially by a pseudorandom number generator and then stored in a large memory.
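The space-time tradeoff in the abstract can be illustrated with a minimal sketch (illustrative only; function names and sizes here are my own, not the paper's, and the paper's models use far larger dimensions). Rule 90 updates each cell to the XOR of its two neighbors, so a short stored seed can be expanded on demand into a longer pseudorandom binary pattern instead of keeping a full random codebook in memory:

```python
import numpy as np

def ca90_step(state: np.ndarray) -> np.ndarray:
    """One update of elementary cellular automaton rule 90 on a
    circular grid: each cell becomes the XOR of its two neighbors."""
    return np.roll(state, 1) ^ np.roll(state, -1)

def expand(seed: np.ndarray, steps: int) -> np.ndarray:
    """Expand a short binary seed into a (steps + 1) x len(seed) bit
    pattern by iterating CA90, instead of storing the whole pattern."""
    rows = [seed]
    for _ in range(steps):
        rows.append(ca90_step(rows[-1]))
    return np.stack(rows)

# Store only a 64-bit seed; regenerate a 16 x 64 pattern on the fly.
rng = np.random.default_rng(0)
seed = rng.integers(0, 2, size=64, dtype=np.uint8)
pattern = expand(seed, steps=15)
```

Because the rule 90 update is linear over GF(2), two seeds that differ in a few bits yield expansions whose disagreement evolves as the CA90 image of the difference pattern; how quickly the similarity between such expansions degrades with the number of steps is the noise-robustness question the paper analyzes.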


Figures (PMC, f0004–f0014):
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0a1e/9215349/aef36d480b02/nihms-1812448-f0004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0a1e/9215349/da41236def1e/nihms-1812448-f0005.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0a1e/9215349/97219b352cc7/nihms-1812448-f0006.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0a1e/9215349/4c26f3645d84/nihms-1812448-f0007.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0a1e/9215349/9ae8a4c6d073/nihms-1812448-f0008.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0a1e/9215349/d0e3c6895eea/nihms-1812448-f0009.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0a1e/9215349/7fef775505ca/nihms-1812448-f0010.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0a1e/9215349/b745d25842a3/nihms-1812448-f0011.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0a1e/9215349/93b447508d6c/nihms-1812448-f0012.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0a1e/9215349/4681bcf00923/nihms-1812448-f0013.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/0a1e/9215349/c375b5f9b106/nihms-1812448-f0014.jpg

Similar Articles

1
Cellular Automata Can Reduce Memory Requirements of Collective-State Computing.
IEEE Trans Neural Netw Learn Syst. 2022 Jun;33(6):2701-2713. doi: 10.1109/TNNLS.2021.3119543. Epub 2022 Jun 1.
2
Symbolic Computation Using Cellular Automata-Based Hyperdimensional Computing.
Neural Comput. 2015 Dec;27(12):2661-92. doi: 10.1162/NECO_a_00787. Epub 2015 Oct 23.
3
Generalized rough and fuzzy rough automata for semantic computing.
Int J Mach Learn Cybern. 2022;13(12):4013-4032. doi: 10.1007/s13042-022-01637-0. Epub 2022 Sep 21.
4
A modular architecture for transparent computation in recurrent neural networks.
Neural Netw. 2017 Jan;85:85-105. doi: 10.1016/j.neunet.2016.09.001. Epub 2016 Sep 24.
5
Computational capabilities of random automata networks for reservoir computing.
Phys Rev E Stat Nonlin Soft Matter Phys. 2013 Apr;87(4):042808. doi: 10.1103/PhysRevE.87.042808. Epub 2013 Apr 16.
6
On separating long- and short-term memories in hyperdimensional computing.
Front Neurosci. 2023 Jan 9;16:867568. doi: 10.3389/fnins.2022.867568. eCollection 2022.
7
Reservoir computing with a random memristor crossbar array.
Nanotechnology. 2024 Jul 25;35(41). doi: 10.1088/1361-6528/ad61ee.
8
Vector Symbolic Architectures as a Computing Framework for Emerging Hardware.
Proc IEEE Inst Electr Electron Eng. 2022 Oct;110(10):1538-1571. Epub 2022 Oct 17.
9
Variable Binding for Sparse Distributed Representations: Theory and Applications.
IEEE Trans Neural Netw Learn Syst. 2023 May;34(5):2191-2204. doi: 10.1109/TNNLS.2021.3105949. Epub 2023 May 2.
10
Collective computational intelligence in biology - Emergence of memory in somatic tissues.
Biosystems. 2023 Jan;223:104816. doi: 10.1016/j.biosystems.2022.104816. Epub 2022 Nov 25.

Cited By

1
Vector Symbolic Architectures as a Computing Framework for Emerging Hardware.
Proc IEEE Inst Electr Electron Eng. 2022 Oct;110(10):1538-1571. Epub 2022 Oct 17.
2
Cellular automata imbedded memristor-based recirculated logic in-memory computing.
Nat Commun. 2023 May 10;14(1):2695. doi: 10.1038/s41467-023-38299-7.
3
Perceptron Theory Can Predict the Accuracy of Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2024 Jul;35(7):9885-9899. doi: 10.1109/TNNLS.2023.3237381. Epub 2024 Jul 10.

References

1
Vector Symbolic Architectures as a Computing Framework for Emerging Hardware.
Proc IEEE Inst Electr Electron Eng. 2022 Oct;110(10):1538-1571. Epub 2022 Oct 17.
2
Perceptron Theory Can Predict the Accuracy of Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2024 Jul;35(7):9885-9899. doi: 10.1109/TNNLS.2023.3237381. Epub 2024 Jul 10.
3
A Highly Energy-Efficient Hyperdimensional Computing Processor for Biosignal Classification.
IEEE Trans Biomed Circuits Syst. 2022 Aug;16(4):524-534. doi: 10.1109/TBCAS.2022.3187944. Epub 2022 Oct 13.
4
On separating long- and short-term memories in hyperdimensional computing.
Front Neurosci. 2023 Jan 9;16:867568. doi: 10.3389/fnins.2022.867568. eCollection 2022.
5
Efficient emotion recognition using hyperdimensional computing with combinatorial channel encoding and cellular automata.
Brain Inform. 2022 Jun 27;9(1):14. doi: 10.1186/s40708-022-00162-8.
6
Variable Binding for Sparse Distributed Representations: Theory and Applications.
IEEE Trans Neural Netw Learn Syst. 2023 May;34(5):2191-2204. doi: 10.1109/TNNLS.2021.3105949. Epub 2023 May 2.
7
Integer Echo State Networks: Efficient Reservoir Computing for Digital Hardware.
IEEE Trans Neural Netw Learn Syst. 2022 Apr;33(4):1688-1701. doi: 10.1109/TNNLS.2020.3043309. Epub 2022 Apr 4.
8
Resonator Networks, 1: An Efficient Solution for Factoring High-Dimensional, Distributed Representations of Data Structures.
Neural Comput. 2020 Dec;32(12):2311-2331. doi: 10.1162/neco_a_01331. Epub 2020 Oct 20.
9
Resonator Networks, 2: Factorization Performance and Capacity Compared to Optimization-Based Methods.
Neural Comput. 2020 Dec;32(12):2332-2388. doi: 10.1162/neco_a_01329. Epub 2020 Oct 20.
10
Density Encoding Enables Resource-Efficient Randomly Connected Neural Networks.
IEEE Trans Neural Netw Learn Syst. 2021 Aug;32(8):3777-3783. doi: 10.1109/TNNLS.2020.3015971. Epub 2021 Aug 3.
11
A Theory of Sequence Indexing and Working Memory in Recurrent Neural Networks.
Neural Comput. 2018 Jun;30(6):1449-1513. doi: 10.1162/neco_a_01084. Epub 2018 Apr 13.
12
Neural Variability and Sampling-Based Probabilistic Representations in the Visual Cortex.
Neuron. 2016 Oct 19;92(2):530-543. doi: 10.1016/j.neuron.2016.09.038.