
Random sketch learning for deep neural networks in edge computing.

Author information

Li Bin, Chen Peijun, Liu Hongfu, Guo Weisi, Cao Xianbin, Du Junzhao, Zhao Chenglin, Zhang Jun

Affiliations

School of Information and Communication Engineering, Beijing University of Posts and Telecommunications, Beijing, China.

School of Information and Electronics, Beijing Institute of Technology, Beijing, China.

Publication information

Nat Comput Sci. 2021 Mar;1(3):221-228. doi: 10.1038/s43588-021-00039-6. Epub 2021 Mar 25.

DOI: 10.1038/s43588-021-00039-6
PMID: 38183196
Abstract

Despite the great potential of deep neural networks (DNNs), they require massive weights and huge computational resources, creating a vast gap when deploying artificial intelligence at low-cost edge devices. Current lightweight DNNs, achieved by high-dimensional space pre-training and post-compression, present challenges when covering the resources deficit, making tiny artificial intelligence hard to be implemented. Here we report an architecture named random sketch learning, or Rosler, for computationally efficient tiny artificial intelligence. We build a universal compressing-while-training framework that directly learns a compact model and, most importantly, enables computationally efficient on-device learning. As validated on different models and datasets, it attains substantial memory reduction of ~50-90× (16-bits quantization), compared with fully connected DNNs. We demonstrate it on low-cost hardware, whereby the computation is accelerated by >180× and the energy consumption is reduced by ~10×. Our method paves the way for deploying tiny artificial intelligence in many scientific and industrial applications.
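To make the "compressing-while-training" idea concrete, the following is a minimal illustrative sketch (not the paper's actual Rosler algorithm, and all names here are hypothetical) of how a random sketch can factor a fully connected layer: a dense weight matrix `W` of shape `(m, n)` is replaced by a compact factor `U = W @ S`, where `S` is a fixed random sketch matrix regenerable from a seed, so only `m*k` values need to be stored instead of `m*n`.

```python
import numpy as np

# Illustrative only: compress a dense layer y = W x into a factored form
# y ≈ U (S^T x), where S is a fixed random sketch (n, k) and U = W S is the
# compact factor kept on-device. Since S can be regenerated from its seed,
# storage drops from m*n weights to m*k.
rng = np.random.default_rng(0)
m, n, k = 256, 1024, 32                         # k << n gives the compression

W = rng.standard_normal((m, n)) / np.sqrt(n)    # stand-in for trained weights
S = rng.standard_normal((n, k)) / np.sqrt(k)    # random sketch; E[S S^T] = I
U = W @ S                                       # compact factor to store

x = rng.standard_normal(n)
y_full = W @ x                                  # original forward pass
y_sketch = U @ (S.T @ x)                        # forward pass in compact form

params_full = m * n
params_sketch = m * k                           # S not counted: rebuilt from seed
print(f"compression: {params_full / params_sketch:.0f}x")  # → compression: 32x
```

This toy version only approximates the layer in expectation; the point of a compressing-while-training framework, as described in the abstract, is to learn the compact factors directly rather than projecting a fully pre-trained network.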


Similar articles

1. Random sketch learning for deep neural networks in edge computing.
   Nat Comput Sci. 2021 Mar;1(3):221-228. doi: 10.1038/s43588-021-00039-6. Epub 2021 Mar 25.
2. Memristors for Neuromorphic Circuits and Artificial Intelligence Applications.
   Materials (Basel). 2020 Feb 20;13(4):938. doi: 10.3390/ma13040938.
3. From Algorithm to Hardware: A Survey on Efficient and Safe Deployment of Deep Neural Networks.
   IEEE Trans Neural Netw Learn Syst. 2025 Apr;36(4):5837-5857. doi: 10.1109/TNNLS.2024.3394494. Epub 2025 Apr 4.
4. Efficient Resource-Aware Convolutional Neural Architecture Search for Edge Computing with Pareto-Bayesian Optimization.
   Sensors (Basel). 2021 Jan 10;21(2):444. doi: 10.3390/s21020444.
5. GXNOR-Net: Training deep neural networks with ternary weights and activations without full-precision memory under a unified discretization framework.
   Neural Netw. 2018 Apr;100:49-58. doi: 10.1016/j.neunet.2018.01.010. Epub 2018 Feb 2.
6. Compressing Deep Networks by Neuron Agglomerative Clustering.
   Sensors (Basel). 2020 Oct 23;20(21):6033. doi: 10.3390/s20216033.
7. Symbolic Deep Networks: A Psychologically Inspired Lightweight and Efficient Approach to Deep Learning.
   Top Cogn Sci. 2022 Oct;14(4):702-717. doi: 10.1111/tops.12571. Epub 2021 Oct 5.
8. An in-memory computing architecture based on a duplex two-dimensional material structure for in situ machine learning.
   Nat Nanotechnol. 2023 May;18(5):493-500. doi: 10.1038/s41565-023-01343-0. Epub 2023 Mar 20.
9. Neuromorphic Sentiment Analysis Using Spiking Neural Networks.
   Sensors (Basel). 2023 Sep 6;23(18):7701. doi: 10.3390/s23187701.
10. Synapse-Mimetic Hardware-Implemented Resistive Random-Access Memory for Artificial Neural Network.
    Sensors (Basel). 2023 Mar 14;23(6):3118. doi: 10.3390/s23063118.

Cited by

1. Near-Sensor Edge Computing System Enabled by a CMOS Compatible Photonic Integrated Circuit Platform Using Bilayer AlN/Si Waveguides.
   Nanomicro Lett. 2025 May 19;17(1):261. doi: 10.1007/s40820-025-01743-y.
2. Orbital learning: a novel, actively orchestrated decentralised learning for healthcare.
   Sci Rep. 2024 May 7;14(1):10459. doi: 10.1038/s41598-024-60915-9.
3. Lead federated neuromorphic learning for wireless edge artificial intelligence.
   Nat Commun. 2022 Jul 25;13(1):4269. doi: 10.1038/s41467-022-32020-w.
4. Ready, Steady, Go AI: A practical tutorial on fundamentals of artificial intelligence and its applications in phenomics image analysis.
   Patterns (N Y). 2021 Sep 10;2(9):100323. doi: 10.1016/j.patter.2021.100323.