Naghashi Vahid, Boukadoum Mounir, Diallo Abdoulaye Banire
Computer Science, Université du Québec à Montréal, Montreal, Canada.
Sci Rep. 2025 Jan 10;15(1):1565. doi: 10.1038/s41598-024-82417-4.
Transformer-based models for time-series forecasting have shown promising performance, and over the past few years several Transformer variants have been proposed in the time-series forecasting domain. However, most existing methods either represent the time series at a single scale, making it challenging to capture multiple time granularities, or ignore the inter-series correlations between the series, which can lead to inaccurate forecasts. In this paper, we address these shortcomings and propose a Transformer-based model that integrates multi-scale patch-wise temporal modeling with channel-wise representation. In the multi-scale temporal part, the input time series is divided into patches of different resolutions to capture the temporal correlations associated with each scale. The channel-wise encoder, which follows the temporal encoder, models the relations among the input series to capture the intricate interactions between them. We further design a multi-step linear decoder to generate the final predictions, reducing overfitting and noise effects. Extensive experiments on seven real-world datasets show that our model (MultiPatchFormer) achieves state-of-the-art results, surpassing current baseline models on error metrics and exhibiting stronger generalizability.
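The multi-scale patching described in the abstract can be illustrated with a minimal sketch: the same lookback window is split into patches at several resolutions, yielding one token sequence per scale. The patch lengths, the non-overlapping stride, and the window size below are assumptions chosen for illustration, not the paper's actual hyperparameters.

```python
def make_patches(series, patch_len, stride):
    """Split a sequence into patches of length patch_len taken every stride steps."""
    return [series[s:s + patch_len]
            for s in range(0, len(series) - patch_len + 1, stride)]

# Hypothetical setup: a lookback window of 96 steps and three patch resolutions,
# with stride == patch_len (non-overlapping patches).
lookback = list(range(96))
scales = [8, 16, 32]
multi_scale = {p: make_patches(lookback, p, p) for p in scales}

for p, patches in multi_scale.items():
    # Each scale produces a different number of coarser or finer "tokens".
    print(f"patch length {p}: {len(patches)} patches")
```

Each per-scale patch sequence would then be embedded and fed to the temporal encoder, so that fine patches expose short-term structure and coarse patches expose long-term trends.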