Ge Jiaqi, Xu Gaochao, Lu Jianchao, Xu Xu, Li Long, Meng Xiangyu
Department of Computer Science and Technology, Jilin University, Changchun 130012, China.
School of Computing, Macquarie University, Sydney, NSW 2109, Australia.
Sensors (Basel). 2024 May 21;24(11):3274. doi: 10.3390/s24113274.
This work develops a generalizable neural network, SensorNet, for sensor feature learning across various applications. The primary challenge addressed is the poor portability of pretrained neural networks to new applications with limited sensor data. To solve this challenge, we design SensorNet, which integrates the flexibility of self-attention with the multi-scale feature locality of convolution. Moreover, we introduce patch-wise self-attention with stacked multi-heads to enrich the sensor feature representation. SensorNet generalizes to pervasive applications with any number of sensor inputs, and is much smaller than the state-of-the-art self-attention and convolution hybrid baseline (0.83 M vs. 3.87 M parameters) while achieving similar performance. The experimental results show that SensorNet achieves state-of-the-art performance compared with the top five models on a competition activity recognition dataset (SHL'18). Moreover, SensorNet pretrained on a large inertial measurement unit (IMU) dataset can be fine-tuned to achieve the best accuracy on a much smaller IMU dataset (up to 5% improvement on WISDM) and to achieve state-of-the-art performance on an EEG dataset (SLEEP-EDF-20), showing the strong generalizability of our approach.
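The patch-wise multi-head self-attention mentioned in the abstract can be illustrated in miniature: split a multi-channel sensor signal into fixed-length patches, then let each attention head mix information across patches. This is a minimal NumPy sketch under assumed settings (the patch length, head count, and random projection weights here are hypothetical; the abstract does not specify SensorNet's actual layer sizes):

```python
import numpy as np

def softmax(x, axis=-1):
    # Numerically stable softmax along the given axis.
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def patchify(signal, patch_len):
    # signal: (channels, time) -> (num_patches, channels * patch_len).
    # Each patch flattens all sensor channels over one time window,
    # so the model works for any number of sensor inputs.
    c, t = signal.shape
    n = t // patch_len
    trimmed = signal[:, :n * patch_len].reshape(c, n, patch_len)
    return trimmed.transpose(1, 0, 2).reshape(n, c * patch_len)

def patch_self_attention(patches, num_heads, rng):
    # patches: (n, d) with d divisible by num_heads.
    # Each head uses its own (hypothetical, randomly initialized)
    # query/key/value projections; head outputs are concatenated.
    n, d = patches.shape
    dh = d // num_heads
    heads = []
    for _ in range(num_heads):
        Wq, Wk, Wv = (rng.standard_normal((d, dh)) / np.sqrt(d)
                      for _ in range(3))
        q, k, v = patches @ Wq, patches @ Wk, patches @ Wv
        attn = softmax(q @ k.T / np.sqrt(dh))  # (n, n) patch-to-patch weights
        heads.append(attn @ v)                 # (n, dh) per head
    return np.concatenate(heads, axis=-1)      # (n, d)

rng = np.random.default_rng(0)
imu = rng.standard_normal((3, 40))       # e.g. 3-axis accelerometer, 40 samples
patches = patchify(imu, patch_len=8)     # -> (5, 24)
out = patch_self_attention(patches, num_heads=4, rng=rng)  # -> (5, 24)
```

In a full model, the random projections would be learned parameters, and convolutional layers would supply the multi-scale local features that the attention layers then relate globally.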