Zhang Dandan, Zhang Zhiqiang, Chen Nanguang, Wang Yun
School of Computer Science and Engineering, Southeast University, Nanjing, China.
College of Design and Engineering, National University of Singapore, Singapore.
Neural Netw. 2025 Jan;181:106800. doi: 10.1016/j.neunet.2024.106800. Epub 2024 Oct 23.
Multivariate time series exhibit complex patterns and structures involving interactions among multiple variables and long-term temporal dependencies, making multivariate long sequence time series forecasting (MLSTF) exceptionally challenging. Despite the significant progress of Transformer-based methods in MLSTF, many models still rely on stacked encoder-decoder architectures to capture complex time series patterns. This increases computational complexity and overlooks spatial pattern information in multivariate time series, limiting model performance. To address these challenges, we propose RFNet, a lightweight model based on recurrent representation and feature enhancement. We partition the time series into fixed-size subsequences to retain local contextual temporal pattern information and cross-variable spatial pattern information. The recurrent representation module employs gate attention mechanisms and memory units to capture local information from the subsequences and obtains long-term correlation information for the input sequence by integrating information across memory units. Meanwhile, a shared multi-layer perceptron (MLP) captures global pattern information of the input sequence. The feature enhancement module explicitly extracts complex spatial patterns in the time series by transforming the input sequence. We validate RFNet on ten real-world datasets. The results demonstrate an improvement of approximately 55.3% over state-of-the-art MLSTF models, highlighting its significant advantage for multivariate long sequence time series forecasting.
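The abstract describes two core steps: partitioning the series into fixed-size subsequences (patches), and mixing each patch into a memory state through a gate. The sketch below illustrates both ideas in NumPy; the patch length, memory width, and the weight matrices `W_g` and `W_h` are hypothetical stand-ins for the paper's gate-attention parameters, which the abstract does not specify.

```python
import numpy as np

def patch_series(x, patch_len):
    """Split a multivariate series x of shape (T, V) into non-overlapping
    fixed-size subsequences of shape (num_patches, patch_len, V).
    Trailing steps that do not fill a full patch are dropped."""
    T, V = x.shape
    n = T // patch_len
    return x[: n * patch_len].reshape(n, patch_len, V)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_memory(patches, W_g, W_h):
    """Illustrative gated memory update: each patch is flattened, embedded,
    and blended into a running memory via a learned gate. W_g and W_h are
    hypothetical weights, not the paper's actual parameterization."""
    n, p, v = patches.shape
    m = np.zeros(W_h.shape[1])                      # memory state
    for i in range(n):
        flat = patches[i].reshape(-1)
        h = flat @ W_h                              # patch embedding
        g = sigmoid(flat @ W_g)                     # gate values in (0, 1)
        m = g * m + (1.0 - g) * h                   # convex gated update
    return m

rng = np.random.default_rng(0)
x = rng.standard_normal((96, 7))                    # 96 steps, 7 variables
patches = patch_series(x, 16)                       # -> (6, 16, 7)
d = 32                                              # hypothetical memory width
W_g = rng.standard_normal((16 * 7, d)) * 0.01
W_h = rng.standard_normal((16 * 7, d)) * 0.01
m = gated_memory(patches, W_g, W_h)
print(patches.shape, m.shape)                       # (6, 16, 7) (32,)
```

This only conveys the patch-then-aggregate flow; RFNet's actual recurrent representation module additionally integrates information across multiple memory units and combines it with a shared MLP's global features.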