Pet Insight Project, Kinship, 1355 Market St #210, San Francisco, CA 94103, USA.
Sensors (Basel). 2020 Apr 28;20(9):2498. doi: 10.3390/s20092498.
In this paper, we present and benchmark FilterNet, a flexible deep learning architecture for time series classification tasks, such as activity recognition via multichannel sensor data. It adapts popular convolutional neural network (CNN) and long short-term memory (LSTM) motifs which have excelled in activity recognition benchmarks, implementing them in a many-to-many architecture to markedly improve frame-by-frame accuracy, event segmentation accuracy, model size, and computational efficiency. We propose several model variants, evaluate them alongside other published models using the Opportunity benchmark dataset, demonstrate the effect of model ensembling and of altering key parameters, and quantify the quality of the models' segmentation of discrete events. We also offer recommendations for use and suggest potential model extensions. FilterNet advances the state of the art in all measured accuracy and speed metrics when applied to the benchmarked dataset, and it can be extensively customized for other applications.
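The many-to-many CNN+LSTM pattern the abstract describes can be sketched as a small PyTorch model. This is a minimal illustrative example, not the paper's actual FilterNet architecture; the layer sizes and class name are assumptions, with the channel and class counts chosen to match the Opportunity benchmark (113 sensor channels, 18 gesture classes):

```python
# Illustrative many-to-many CNN + LSTM classifier for multichannel time series.
# NOT the published FilterNet architecture -- layer widths and depths are assumptions.
import torch
import torch.nn as nn

class ManyToManyCnnLstm(nn.Module):
    def __init__(self, n_channels=113, n_classes=18, hidden=64):
        super().__init__()
        # 1-D convolutions extract local features at each time step;
        # padding keeps the sequence length unchanged.
        self.conv = nn.Sequential(
            nn.Conv1d(n_channels, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.Conv1d(hidden, hidden, kernel_size=5, padding=2),
            nn.ReLU(),
        )
        # A bidirectional LSTM models longer-range temporal context.
        self.lstm = nn.LSTM(hidden, hidden, batch_first=True, bidirectional=True)
        # A linear head emits one class-score vector per frame (many-to-many),
        # rather than a single label per window (many-to-one).
        self.head = nn.Linear(2 * hidden, n_classes)

    def forward(self, x):  # x: (batch, time, channels)
        z = self.conv(x.transpose(1, 2)).transpose(1, 2)  # (batch, time, hidden)
        z, _ = self.lstm(z)
        return self.head(z)  # (batch, time, n_classes)

model = ManyToManyCnnLstm()
scores = model(torch.randn(2, 100, 113))  # 2 windows, 100 frames, 113 channels
print(tuple(scores.shape))  # one prediction per frame: (2, 100, 18)
```

Because every input frame receives its own prediction, frame-by-frame accuracy and event segmentation can be evaluated directly, which is the advantage of the many-to-many formulation the abstract highlights.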