Department of Information and Communication Engineering, School of Informatics, Xiamen University, Xiamen, China.
Strategic Centre for Research in Privacy-Preserving Technologies and Systems, Nanyang Technological University, Singapore, Singapore.
Nat Commun. 2022 Jul 25;13(1):4269. doi: 10.1038/s41467-022-32020-w.
To realize the full potential of wireless edge artificial intelligence (AI), very large and diverse datasets will often be required for energy-demanding model training on resource-constrained edge devices. This paper proposes a lead federated neuromorphic learning (LFNL) technique, a decentralized, energy-efficient, brain-inspired computing method based on spiking neural networks. The proposed technique enables edge devices to exploit a brain-like biophysiological structure to collaboratively train a global model while helping preserve privacy. Experimental results show that, under uneven dataset distributions among edge devices, LFNL achieves recognition accuracy comparable to existing edge AI techniques while substantially reducing data traffic by >3.5× and computational latency by >2.0×. Furthermore, LFNL reduces energy consumption by >4.5× compared with standard federated learning, with a slight accuracy loss of up to 1.5%. The proposed LFNL can therefore facilitate the development of brain-inspired computing and edge AI.