Deep Liquid State Machines With Neural Plasticity for Video Activity Recognition.

Authors

Soures Nicholas, Kudithipudi Dhireesha

Affiliations

Neuromorphic AI Laboratory, Rochester Institute of Technology, Rochester, NY, United States.

Publication

Front Neurosci. 2019 Jul 4;13:686. doi: 10.3389/fnins.2019.00686. eCollection 2019.

Abstract

Real-world applications such as first-person video activity recognition require intelligent edge devices. However, the size, weight, and power constraints of embedded platforms cannot support resource-intensive state-of-the-art algorithms. Lightweight machine learning algorithms, such as reservoir computing with shallow three-layer networks, are computationally frugal because only the output layer is trained. By reducing network depth and plasticity, reservoir computing minimizes computational cost and complexity, making these algorithms well suited to edge devices. However, as a trade-off for their frugal nature, reservoir computing sacrifices computational power compared to state-of-the-art methods. A good compromise between reservoir computing and fully supervised networks is the proposed deep-LSM network. The deep-LSM is a deep spiking neural network that captures dynamic information over multiple time scales with a combination of randomly connected layers and unsupervised layers. The deep-LSM processes the captured dynamic information through an attention-modulated readout layer to perform classification. We demonstrate that the deep-LSM achieves an average accuracy of 84.78% on the DogCentric video activity recognition task, beating the state of the art. The deep-LSM also shows up to 91.13% memory savings and up to a 91.55% reduction in synaptic operations compared to similar recurrent neural network models. Based on these results, we claim that the deep-LSM is capable of overcoming limitations of traditional reservoir computing while maintaining the low computational cost associated with reservoir computing.
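
For context on the reservoir-computing idea the abstract builds on, below is a minimal sketch of a generic liquid state machine: a fixed random spiking reservoir whose time-averaged activity is classified by a trained linear readout. This is illustrative only and is not the authors' deep-LSM; all sizes, constants, and function names (run_reservoir, train_readout) are assumptions, and the ridge-regression readout stands in for whatever training rule a given implementation uses.

```python
import numpy as np

# Minimal generic liquid state machine (LSM) sketch -- illustrative only,
# not the authors' deep-LSM. Sizes, constants, and the ridge-regression
# readout are assumptions chosen for brevity.

rng = np.random.default_rng(0)

N_IN, N_RES, N_CLASSES = 16, 200, 5          # assumed input, reservoir, class sizes
W_in = rng.normal(0.0, 0.5, (N_RES, N_IN))   # fixed random input projection
W_res = rng.normal(0.0, 1.0, (N_RES, N_RES))
W_res *= rng.random((N_RES, N_RES)) < 0.1    # sparse random recurrence, never trained

V_TH, V_DECAY = 1.0, 0.9                     # assumed spike threshold and membrane leak


def run_reservoir(spike_train):
    """Drive leaky integrate-and-fire neurons with a (T, N_IN) binary spike
    train; return time-averaged spike counts as the liquid state."""
    v = np.zeros(N_RES)
    prev_spikes = np.zeros(N_RES)
    counts = np.zeros(N_RES)
    for x_t in spike_train:
        v = V_DECAY * v + W_in @ x_t + W_res @ prev_spikes
        prev_spikes = (v >= V_TH).astype(float)
        v = np.where(prev_spikes > 0, 0.0, v)          # reset neurons that fired
        counts += prev_spikes
    return counts / len(spike_train)


def train_readout(states, labels, reg=1e-2):
    """Train only the linear readout (ridge regression); the recurrent
    weights stay fixed, as in reservoir computing."""
    targets = np.eye(N_CLASSES)[labels]                # one-hot class targets
    return np.linalg.solve(states.T @ states + reg * np.eye(N_RES),
                           states.T @ targets)


# Toy usage: random spike trains stand in for spike-encoded video features.
trains = [rng.random((50, N_IN)) < 0.1 for _ in range(20)]
labels = rng.integers(0, N_CLASSES, 20)
states = np.stack([run_reservoir(t) for t in trains])
W_out = train_readout(states, labels)
predictions = np.argmax(states @ W_out, axis=1)
```

The paper's deep-LSM goes beyond this single fixed reservoir by stacking liquid layers with unsupervised plasticity and adding an attention-modulated readout; the sketch only captures the core reservoir-computing property that just the readout is trained.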

Figure 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/f6c5/6621912/607a4792ddc1/fnins-13-00686-g0001.jpg
