Nathan A Lutes, Venkata Sriram Siddhardh Nadendla, K Krishnamurthy
Department of Mechanical and Aerospace Engineering, Missouri University of Science and Technology, 400 W. 13th Street, Rolla, MO 65409, United States of America.
Department of Computer Science, Missouri University of Science and Technology, 500 W. 15th Street, Rolla, MO 65409, United States of America.
J Neural Eng. 2025 Feb 13;22(1). doi: 10.1088/1741-2552/adb079.
This work explores the use of a few-shot transfer learning method to train and implement a convolutional spiking neural network (CSNN) on a BrainChip Akida AKD1000 neuromorphic system-on-chip, developing individual-level models from electroencephalographic (EEG) data instead of the traditionally used group-level models. The efficacy of the method is studied on an advanced driver assist system related task of predicting braking intention.

Data are collected from participants operating an NVIDIA JetBot on a testbed simulating urban streets under three different scenarios. Participants receive a braking indicator in the form of: (1) an audio countdown in a nominal, stress-free baseline environment; (2) an audio countdown in an environment with added elements of physical fatigue and active cognitive distraction; and (3) a visual cue given through stoplights in a stress-free environment. These datasets are then used to develop individual-level models from group-level models via a few-shot transfer learning method, which involves: (1) creating a group-level model by training a convolutional neural network (CNN) on group-level data, followed by quantization and recouping any performance loss through quantization-aware retraining; (2) converting the CNN to be compatible with the Akida AKD1000 processor; and (3) training the final decision layer on individual-level data subsets with an online Akida edge-learning algorithm to create individually customized models.

The methodology rapidly adapts the group-level model into individual-specific braking intention predictive models in as few as three training epochs while achieving at least 90% accuracy, true positive rate, and true negative rate. Further, results demonstrate the energy efficiency of the neuromorphic hardware: network inference on the Akida AKD1000 processor reduces power by over 97% with only a 1.3× increase in latency compared to an Intel Xeon central processing unit.
Similar results were obtained in a subsequent ablation study using a subset of five of the 19 EEG channels. Especially relevant to real-time applications, this work presents an energy-efficient few-shot transfer learning method, implemented on a neuromorphic processor, that can retrain a CSNN as new data become available or operating conditions change, and can customize group-level models into personalized models unique to each individual.
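The few-shot adaptation step described in the abstract (freezing a pretrained group-level network and retraining only the final decision layer on a small individual-level data subset for a few epochs) can be illustrated with a generic sketch. This is not the authors' code or the Akida edge-learning API: the `GroupCNN` architecture, layer sizes, and three-epoch default are illustrative assumptions, written here in plain PyTorch rather than on neuromorphic hardware.

```python
# Hypothetical sketch of few-shot adaptation: freeze the feature
# extractor of a pretrained group-level CNN and retrain only the
# final decision layer on a small individual-level data subset.
# All names and hyperparameters are illustrative, not from the paper.
import torch
import torch.nn as nn


class GroupCNN(nn.Module):
    """Stand-in for a group-level CNN over EEG (channels x time samples)."""

    def __init__(self, n_channels: int = 19, n_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 16, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(8),
            nn.Flatten(),
        )
        self.decision = nn.Linear(16 * 8, n_classes)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.decision(self.features(x))


def adapt_to_individual(model: GroupCNN, loader, epochs: int = 3, lr: float = 1e-3):
    """Freeze the feature layers; train only the decision layer briefly."""
    for p in model.features.parameters():
        p.requires_grad = False  # group-level features stay fixed
    opt = torch.optim.Adam(model.decision.parameters(), lr=lr)
    loss_fn = nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
    return model
```

Restricting gradient updates to the last layer is what makes the adaptation "few-shot": only a small number of parameters are fit to the individual's data, which is why a handful of epochs on a small subset can suffice.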