

History Marginalization Improves Forecasting in Variational Recurrent Neural Networks.

Authors

Qiu Chen, Mandt Stephan, Rudolph Maja

Affiliations

Bosch Center for AI, 71272 Renningen, Germany.

Department of Computer Science, TU Kaiserslautern, 67653 Kaiserslautern, Germany.

Publication

Entropy (Basel). 2021 Nov 24;23(12):1563. doi: 10.3390/e23121563.

DOI: 10.3390/e23121563
PMID: 34945869
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC8700018/
Abstract

Deep probabilistic time series forecasting models have become an integral part of machine learning. While several powerful generative models have been proposed, we provide evidence that their associated inference models are oftentimes too limited and cause the generative model to predict mode-averaged dynamics. Mode-averaging is problematic since many real-world sequences are highly multi-modal, and their averaged dynamics are unphysical (e.g., predicted taxi trajectories might run through buildings on the street map). To better capture multi-modality, we develop variational dynamic mixtures (VDM): a new variational family to infer sequential latent variables. The VDM approximate posterior at each time step is a mixture density network, whose parameters come from propagating multiple samples through a recurrent architecture. This results in an expressive multi-modal posterior approximation. In an empirical study, we show that VDM outperforms competing approaches on highly multi-modal datasets from different domains.
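The mechanism the abstract describes, a per-step approximate posterior that is a mixture density, with component parameters obtained by propagating multiple samples through a recurrent architecture, can be sketched in a few lines. The dimensions, weight initializations, and the specific update rule below are illustrative assumptions, not the paper's implementation:

```python
import numpy as np

# Minimal sketch of a VDM-style step: K hidden states are propagated in
# parallel, and each yields one mixture component of the approximate
# posterior over the latent variable z. All sizes/weights are toy choices.
rng = np.random.default_rng(0)
K, H, Z, X = 4, 8, 2, 3  # components, hidden size, latent size, input size

def softmax(v):
    e = np.exp(v - v.max())
    return e / e.sum()

# Random stand-ins for trained weights.
W_hh = rng.normal(scale=0.3, size=(H, H))   # hidden -> hidden
W_zh = rng.normal(scale=0.3, size=(Z, H))   # previous latent sample -> hidden
W_xh = rng.normal(scale=0.3, size=(X, H))   # observation -> hidden
W_mu = rng.normal(scale=0.3, size=(H, Z))   # hidden -> component mean
W_ls = rng.normal(scale=0.1, size=(H, Z))   # hidden -> component log-scale
w_pi = rng.normal(scale=0.3, size=(H,))     # hidden -> mixture logit

def vdm_step(h, z_prev, x):
    """One recurrent step: K hidden states -> K-component mixture posterior."""
    h_new = np.tanh(h @ W_hh + z_prev @ W_zh + x[None, :] @ W_xh)  # (K, H)
    mu = h_new @ W_mu                         # component means   (K, Z)
    sigma = np.exp(h_new @ W_ls)              # component scales  (K, Z)
    pi = softmax(h_new @ w_pi)                # mixture weights   (K,)
    # Draw one sample per component; these propagate to the next step,
    # keeping K distinct histories alive instead of averaging them away.
    z = mu + sigma * rng.standard_normal((K, Z))
    return h_new, z, (pi, mu, sigma)

h = np.zeros((K, H))
z = np.zeros((K, Z))
for t in range(5):                            # a short toy sequence
    x_t = rng.normal(size=X)
    h, z, (pi, mu, sigma) = vdm_step(h, z, x_t)
```

Because each of the K samples follows its own recurrent trajectory, the resulting mixture can place mass on several distinct modes, which is the property a single-sample (unimodal) posterior approximation lacks and which, per the abstract, causes mode-averaged forecasts.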


Figures
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/314d/8700018/7a49be943604/entropy-23-01563-g001.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/314d/8700018/51eea49e239b/entropy-23-01563-g002.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/314d/8700018/ce5e6ffe2b50/entropy-23-01563-g003.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/314d/8700018/6fc1f2f02d3a/entropy-23-01563-g004.jpg
https://cdn.ncbi.nlm.nih.gov/pmc/blobs/314d/8700018/66889d402836/entropy-23-01563-g005.jpg

Similar Articles

1. History Marginalization Improves Forecasting in Variational Recurrent Neural Networks.
Entropy (Basel). 2021 Nov 24;23(12):1563. doi: 10.3390/e23121563.
2. Sampling the Variational Posterior with Local Refinement.
Entropy (Basel). 2021 Nov 8;23(11):1475. doi: 10.3390/e23111475.
3. LCBM: A Multi-View Probabilistic Model for Multi-Label Classification.
IEEE Trans Pattern Anal Mach Intell. 2021 Aug;43(8):2682-2696. doi: 10.1109/TPAMI.2020.2974203. Epub 2021 Jul 1.
4. Deep active inference.
Biol Cybern. 2018 Dec;112(6):547-573. doi: 10.1007/s00422-018-0785-7. Epub 2018 Oct 22.
5. Novel deep generative simultaneous recurrent model for efficient representation learning.
Neural Netw. 2018 Nov;107:12-22. doi: 10.1016/j.neunet.2018.04.020. Epub 2018 Aug 9.
6. Representation Learning for Dynamic Functional Connectivities via Variational Dynamic Graph Latent Variable Models.
Entropy (Basel). 2022 Jan 19;24(2):152. doi: 10.3390/e24020152.
7. Anomaly Detection of Time Series With Smoothness-Inducing Sequential Variational Auto-Encoder.
IEEE Trans Neural Netw Learn Syst. 2021 Mar;32(3):1177-1191. doi: 10.1109/TNNLS.2020.2980749. Epub 2021 Mar 1.
8. Recommendation via Collaborative Autoregressive Flows.
Neural Netw. 2020 Jun;126:52-64. doi: 10.1016/j.neunet.2020.03.010. Epub 2020 Mar 13.
9. Goal-Directed Planning for Habituated Agents by Active Inference Using a Variational Recurrent Neural Network.
Entropy (Basel). 2020 May 18;22(5):564. doi: 10.3390/e22050564.
10. Variational HyperAdam: A Meta-Learning Approach to Network Training.
IEEE Trans Pattern Anal Mach Intell. 2022 Aug;44(8):4469-4484. doi: 10.1109/TPAMI.2021.3061581. Epub 2022 Jul 1.
