

Unsupervised Pre-training of a Deep LSTM-based Stacked Autoencoder for Multivariate Time Series Forecasting Problems

Affiliations

College of Computer Science and Information Technology, King Faisal University, Al-Ahsa, 31982, Saudi Arabia.

Center for Artificial Intelligence and RObotics (CAIRO), Faculty of Science, Aswan University, Aswan, 81528, Egypt.

Publication

Sci Rep. 2019 Dec 13;9(1):19038. doi: 10.1038/s41598-019-55320-6.

DOI:10.1038/s41598-019-55320-6
PMID:31836728
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC6911101/
Abstract

Currently, most real-world time series datasets are multivariate and are rich in dynamical information of the underlying system. Such datasets are attracting much attention; therefore, the need for accurate modelling of such high-dimensional datasets is increasing. Recently, the deep architecture of the recurrent neural network (RNN) and its variant long short-term memory (LSTM) have been proven to be more accurate than traditional statistical methods in modelling time series data. Despite the reported advantages of the deep LSTM model, its performance in modelling multivariate time series (MTS) data has not been satisfactory, particularly when attempting to process highly non-linear and long-interval MTS datasets. The reason is that the supervised learning approach initializes the neurons randomly in such recurrent networks, disabling the neurons that ultimately must properly learn the latent features of the correlated variables included in the MTS dataset. In this paper, we propose a pre-trained LSTM-based stacked autoencoder (LSTM-SAE) approach in an unsupervised learning fashion to replace the random weight initialization strategy adopted in deep LSTM recurrent networks. For evaluation purposes, two different case studies that include real-world datasets are investigated, where the performance of the proposed approach compares favourably with the deep LSTM approach. In addition, the proposed approach outperforms several reference models investigating the same case studies. Overall, the experimental results clearly show that the unsupervised pre-training approach improves the performance of deep LSTM and leads to better and faster convergence than other models.
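The core idea in the abstract — pre-train each layer as an autoencoder on the inputs (unsupervised), then reuse the learned encoder weights to initialize the supervised forecasting model instead of starting from random weights — can be sketched in miniature. The snippet below is a deliberately simplified, pure-Python illustration: a single linear unit stands in for an LSTM layer, and all names, learning rates, and the toy series are illustrative assumptions, not taken from the paper.

```python
# Minimal sketch of unsupervised pre-training followed by supervised
# fine-tuning, with a scalar linear "layer" standing in for an LSTM.
import random

random.seed(0)

def pretrain_autoencoder(xs, steps=500, lr=0.05):
    """Unsupervised phase: train x_hat = w_dec * (w_enc * x) to
    reconstruct x; return the learned encoder weight w_enc."""
    w_enc = random.uniform(-0.1, 0.1)
    w_dec = random.uniform(-0.1, 0.1)
    for _ in range(steps):
        for x in xs:
            h = w_enc * x          # encode
            x_hat = w_dec * h      # decode (reconstruct)
            err = x_hat - x
            # gradients of 0.5 * err**2 w.r.t. each weight
            w_dec -= lr * err * h
            w_enc -= lr * err * w_dec * x
    return w_enc

# toy univariate series: next value is 0.9 * current value
series = [0.9 ** t for t in range(20)]
inputs, targets = series[:-1], series[1:]

w0 = pretrain_autoencoder(inputs)  # replaces random initialization

def finetune(w, steps=100, lr=0.05):
    """Supervised phase: fit x_{t+1} ~ w * x_t, starting from the
    pre-trained weight rather than a random one."""
    for _ in range(steps):
        for x, y in zip(inputs, targets):
            err = w * x - y
            w -= lr * err * x
    return w

w = finetune(w0)
print(round(w, 2))  # learned one-step coefficient, close to 0.9
```

In the paper's actual setting the reconstructed unit is a deep LSTM encoder–decoder over multivariate windows, and the full stack is fine-tuned end to end; the two-phase structure (reconstruction objective first, forecasting objective second) is the part this sketch preserves.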


Figures

Fig 1: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/15e0/6911101/93b831a9564f/41598_2019_55320_Fig1_HTML.jpg
Fig 2: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/15e0/6911101/d43229eff456/41598_2019_55320_Fig2_HTML.jpg
Fig 3: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/15e0/6911101/d30fe53e92e3/41598_2019_55320_Fig3_HTML.jpg
Fig 4: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/15e0/6911101/d04cbad3b455/41598_2019_55320_Fig4_HTML.jpg
Fig 5: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/15e0/6911101/0b43869bfa8e/41598_2019_55320_Fig5_HTML.jpg
Fig 6: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/15e0/6911101/a98ddf5b23bd/41598_2019_55320_Fig6_HTML.jpg
Fig 7: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/15e0/6911101/e3e1ac6ace01/41598_2019_55320_Fig7_HTML.jpg
Fig 8: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/15e0/6911101/4d6cf958362b/41598_2019_55320_Fig8_HTML.jpg
Fig 9: https://cdn.ncbi.nlm.nih.gov/pmc/blobs/15e0/6911101/05c121c6bc14/41598_2019_55320_Fig9_HTML.jpg

Similar Articles

1. Unsupervised Pre-training of a Deep LSTM-based Stacked Autoencoder for Multivariate Time Series Forecasting Problems.
Sci Rep. 2019 Dec 13;9(1):19038. doi: 10.1038/s41598-019-55320-6.
2. Developing a multivariate time series forecasting framework based on stacked autoencoders and multi-phase feature.
Heliyon. 2024 Mar 19;10(7):e27860. doi: 10.1016/j.heliyon.2024.e27860. eCollection 2024 Apr 15.
3. DGSLSTM: Deep Gated Stacked Long Short-Term Memory Neural Network for Traffic Flow Forecasting of Transportation Networks on Big Data Environment.
Big Data. 2024 Dec;12(6):504-517. doi: 10.1089/big.2021.0013. Epub 2022 Feb 10.
4. Deep belief improved bidirectional LSTM for multivariate time series forecasting.
Math Biosci Eng. 2023 Aug 17;20(9):16596-16627. doi: 10.3934/mbe.2023739.
5. A deep LSTM autoencoder-based framework for predictive maintenance of a proton radiotherapy delivery system.
Artif Intell Med. 2022 Oct;132:102387. doi: 10.1016/j.artmed.2022.102387. Epub 2022 Aug 30.
6. DAFA-BiLSTM: Deep Autoregression Feature Augmented Bidirectional LSTM network for time series prediction.
Neural Netw. 2023 Jan;157:240-256. doi: 10.1016/j.neunet.2022.10.009. Epub 2022 Oct 14.
7. Time series forecasting of Covid-19 using deep learning models: India-USA comparative case study.
Chaos Solitons Fractals. 2020 Nov;140:110227. doi: 10.1016/j.chaos.2020.110227. Epub 2020 Aug 20.
8. Short-term wind power forecasting through stacked and bi directional LSTM techniques.
PeerJ Comput Sci. 2024 Mar 29;10:e1949. doi: 10.7717/peerj-cs.1949. eCollection 2024.
9. Recurrent transform learning.
Neural Netw. 2019 Oct;118:271-279. doi: 10.1016/j.neunet.2019.07.003. Epub 2019 Jul 15.
10. Novel Efficient RNN and LSTM-Like Architectures: Recurrent and Gated Broad Learning Systems and Their Applications for Text Classification.
IEEE Trans Cybern. 2021 Mar;51(3):1586-1597. doi: 10.1109/TCYB.2020.2969705. Epub 2021 Feb 17.

Cited By

1. Enhancing cybersecurity in virtual power plants by detecting network based cyber attacks using an unsupervised autoencoder approach.
Sci Rep. 2025 Sep 5;15(1):32374. doi: 10.1038/s41598-025-01863-w.
2. Analyzing crises in global financial indices using Recurrent Neural Network based Autoencoder.
PLoS One. 2025 Jul 14;20(7):e0326947. doi: 10.1371/journal.pone.0326947. eCollection 2025.
3. Time-Series Representation Feature Refinement with a Learnable Masking Augmentation Framework in Contrastive Learning.
Sensors (Basel). 2024 Dec 11;24(24):7932. doi: 10.3390/s24247932.
4. Enhancing human computer interaction with coot optimization and deep learning for multi language identification.
Sci Rep. 2024 Oct 3;14(1):22963. doi: 10.1038/s41598-024-74327-2.
5. Optimizing flood predictions by integrating LSTM and physical-based models with mixed historical and simulated data.
Heliyon. 2024 Jun 26;10(13):e33669. doi: 10.1016/j.heliyon.2024.e33669. eCollection 2024 Jul 15.
6. Autoencoder to Identify Sex-Specific Sub-phenotypes in Alzheimer's Disease Progression Using Longitudinal Electronic Health Records.
medRxiv. 2024 Jul 11:2024.07.07.24310055. doi: 10.1101/2024.07.07.24310055.
7. Condition Monitoring and Predictive Maintenance of Assets in Manufacturing Using LSTM-Autoencoders and Transformer Encoders.
Sensors (Basel). 2024 May 18;24(10):3215. doi: 10.3390/s24103215.
8. LSTM-Autoencoder Based Anomaly Detection Using Vibration Data of Wind Turbines.
Sensors (Basel). 2024 Apr 29;24(9):2833. doi: 10.3390/s24092833.
9. Developing a multivariate time series forecasting framework based on stacked autoencoders and multi-phase feature.
Heliyon. 2024 Mar 19;10(7):e27860. doi: 10.1016/j.heliyon.2024.e27860. eCollection 2024 Apr 15.
10. Enhanced SARS-CoV-2 case prediction using public health data and machine learning models.
JAMIA Open. 2024 Feb 10;7(1):ooae014. doi: 10.1093/jamiaopen/ooae014. eCollection 2024 Apr.

References

1. A hybrid model for spatiotemporal forecasting of PM2.5 based on graph convolutional neural network and long short-term memory.
Sci Total Environ. 2019 May 10;664:1-10. doi: 10.1016/j.scitotenv.2019.01.333. Epub 2019 Feb 1.
2. A Deep CNN-LSTM Model for Particulate Matter (PM2.5) Forecasting in Smart Cities.
Sensors (Basel). 2018 Jul 10;18(7):2220. doi: 10.3390/s18072220.
3. A deep learning framework for financial time series using stacked autoencoders and long-short term memory.
PLoS One. 2017 Jul 14;12(7):e0180944. doi: 10.1371/journal.pone.0180944. eCollection 2017.
4. LSTM: A Search Space Odyssey.
IEEE Trans Neural Netw Learn Syst. 2017 Oct;28(10):2222-2232. doi: 10.1109/TNNLS.2016.2582924. Epub 2016 Jul 8.
5. Deep learning in neural networks: an overview.
Neural Netw. 2015 Jan;61:85-117. doi: 10.1016/j.neunet.2014.09.003. Epub 2014 Oct 13.
6. Solving the linear interval tolerance problem for weight initialization of neural networks.
Neural Netw. 2014 Jun;54:17-37. doi: 10.1016/j.neunet.2014.02.006. Epub 2014 Feb 24.
7. Learning long-term dependencies with gradient descent is difficult.
IEEE Trans Neural Netw. 1994;5(2):157-66. doi: 10.1109/72.279181.
8. Reducing the dimensionality of data with neural networks.
Science. 2006 Jul 28;313(5786):504-7. doi: 10.1126/science.1127647.
9. A fast learning algorithm for deep belief nets.
Neural Comput. 2006 Jul;18(7):1527-54. doi: 10.1162/neco.2006.18.7.1527.
10. Learning to forget: continual prediction with LSTM.
Neural Comput. 2000 Oct;12(10):2451-71. doi: 10.1162/089976600300015015.