Jiaxing Sun, Yanhui Li, Yuying Zhao
Department of Bohai Rim Energy Research Institute, Northeast Petroleum University, No. 550, West Section of Hebei Street, Qinhuangdao 066004, Hebei Province, China.
Electrical and Information Engineering, Northeast Petroleum University, No. 99 Xuefu Street, Daqing 163318, Heilongjiang Province, China.
Sci Rep. 2025 Aug 7;15(1):28904. doi: 10.1038/s41598-025-13680-2.
Although Transformers perform well in time series prediction, they struggle with real-world data whose joint distribution changes over time. Previous studies have focused on reducing the non-stationarity of sequences through smoothing, but stripping sequences of their inherent non-stationarity discards information that could guide the prediction of sudden real-world events. To resolve the contradiction between sequence predictability and model capability, this paper proposes an efficient Transformer-based model design for multivariate non-stationary time series. The design rests on two core components: (1) a low-cost non-stationary attention mechanism, which restores intrinsic non-stationary information to temporal dependencies at low computational cost by approximating the distinguishable attention learned on the original sequence; (2) dual-data-stream progressive learning, which adds an auxiliary output stream to improve information aggregation, enabling the model to learn the residuals of the supervision signal layer by layer. The proposed model outperforms mainstream Transformers with an average improvement of 5.3% across multiple datasets, providing theoretical support for the analysis of non-stationary engineering data.
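To make the two components concrete, below is a minimal PyTorch sketch of the first one: an attention block whose logits are rescaled by factors derived from the statistics removed during series normalization, so the layer can approximate the attention it would have learned on the raw (non-stationary) sequence. This is an illustrative reconstruction under stated assumptions, not the authors' published code; the class name and the `tau`/`delta` rescaling factors (and their shapes) are our assumptions.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DeStationaryAttention(nn.Module):
    # Multi-head attention whose logits are rescaled by factors derived
    # from the statistics stripped out by series normalization, thereby
    # re-injecting non-stationary information at low extra cost.
    def __init__(self, d_model: int, n_heads: int):
        super().__init__()
        assert d_model % n_heads == 0
        self.h, self.d = n_heads, d_model // n_heads
        self.qkv = nn.Linear(d_model, 3 * d_model)
        self.proj = nn.Linear(d_model, d_model)

    def forward(self, x, tau, delta):
        # x:     (B, L, d_model) embedding of the normalized series
        # tau:   (B, 1, 1, 1) positive scale, e.g. learned from the series std
        # delta: (B, 1, 1, L) shift, e.g. learned from the series mean
        B, L, _ = x.shape
        q, k, v = self.qkv(x).chunk(3, dim=-1)
        q = q.view(B, L, self.h, self.d).transpose(1, 2)   # (B, h, L, d)
        k = k.view(B, L, self.h, self.d).transpose(1, 2)
        v = v.view(B, L, self.h, self.d).transpose(1, 2)
        logits = q @ k.transpose(-2, -1) / self.d ** 0.5   # (B, h, L, L)
        logits = tau * logits + delta       # re-inject non-stationarity
        attn = F.softmax(logits, dim=-1)
        out = (attn @ v).transpose(1, 2).reshape(B, L, -1)
        return self.proj(out)
```

The second component, dual-data-stream progressive learning, can be sketched in the same hedged spirit: each layer residually refines a main hidden stream while contributing a partial forecast to an auxiliary output stream, so layer k effectively fits the residual of the supervision signal left by layers 1..k-1. The class and parameter names below are hypothetical.

```python
class DualStreamEncoder(nn.Module):
    # Main stream: residually refined hidden states.
    # Auxiliary stream: per-layer partial forecasts summed into the
    # final prediction, realizing layer-by-layer residual learning.
    def __init__(self, d_model, n_heads, n_layers, n_vars):
        super().__init__()
        self.layers = nn.ModuleList(
            DeStationaryAttention(d_model, n_heads) for _ in range(n_layers)
        )
        self.heads = nn.ModuleList(
            nn.Linear(d_model, n_vars) for _ in range(n_layers)
        )

    def forward(self, x, tau, delta):
        pred = 0.0
        for layer, head in zip(self.layers, self.heads):
            x = x + layer(x, tau, delta)   # main (hidden) stream
            pred = pred + head(x)          # auxiliary output stream
        return pred                        # (B, L, n_vars)

# Example: batch of 8 normalized series, length 96, 7 variables, 64-dim embedding
model = DualStreamEncoder(d_model=64, n_heads=4, n_layers=3, n_vars=7)
x = torch.randn(8, 96, 64)
tau = torch.ones(8, 1, 1, 1)
delta = torch.zeros(8, 1, 1, 96)
print(model(x, tau, delta).shape)  # torch.Size([8, 96, 7])
```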