Suppr 超能文献

Hierarchical and Self-Attended Sequence Autoencoder.

Publication Information

IEEE Trans Pattern Anal Mach Intell. 2022 Sep;44(9):4975-4986. doi: 10.1109/TPAMI.2021.3068187. Epub 2022 Aug 4.

DOI: 10.1109/TPAMI.2021.3068187
PMID: 33755556
Abstract

It is important and challenging to infer stochastic latent semantics for natural language applications. The difficulty in stochastic sequential learning is caused by posterior collapse in variational inference, in which the estimated latent variables disregard the input sequence. This paper proposes three components to tackle this difficulty and build a variational sequence autoencoder (VSAE) in which sufficient latent information is learned for sophisticated sequence representation. First, complementary encoders based on a long short-term memory (LSTM) and a pyramid bidirectional LSTM are merged to characterize the global and structural dependencies of an input sequence, respectively. Second, a stochastic self-attention mechanism is incorporated into the recurrent decoder. The latent information is attended to encourage interaction between inference and generation in the encoder-decoder training procedure. Third, an autoregressive Gaussian prior over the latent variables is used to preserve the information bound. Different variants of VSAE are proposed to mitigate posterior collapse in sequence modeling. A series of experiments demonstrates that the proposed individual and hybrid sequence autoencoders substantially improve performance in variational sequential learning for language modeling and in semantic understanding for document classification and summarization.
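The posterior collapse that the abstract targets is commonly diagnosed by monitoring the per-dimension KL divergence between the approximate posterior and the Gaussian prior: a latent dimension whose KL term is near zero has collapsed onto the prior and carries no information about the input. The sketch below is only an illustration of that diagnostic for diagonal Gaussians (it is not the paper's implementation; the function names and the example values are hypothetical):

```python
import math

def gaussian_kl(mu_q, sigma_q, mu_p, sigma_p):
    """KL( N(mu_q, sigma_q^2) || N(mu_p, sigma_p^2) ) for scalar Gaussians."""
    return (math.log(sigma_p / sigma_q)
            + (sigma_q ** 2 + (mu_q - mu_p) ** 2) / (2.0 * sigma_p ** 2)
            - 0.5)

def kl_per_dimension(posterior, prior):
    """Per-dimension KL between a diagonal-Gaussian posterior and prior.

    Each argument is a list of (mean, std) pairs, one per latent dimension.
    Dimensions with KL close to zero have 'collapsed' onto the prior.
    """
    return [gaussian_kl(mq, sq, mp, sp)
            for (mq, sq), (mp, sp) in zip(posterior, prior)]

# Hypothetical example: dimension 0 matches the prior exactly (collapsed,
# KL = 0), while dimension 1 deviates from the prior and so still encodes
# information about the input (KL > 0).
posterior = [(0.0, 1.0), (2.0, 0.5)]
prior = [(0.0, 1.0), (0.0, 1.0)]
kls = kl_per_dimension(posterior, prior)
```

An autoregressive (rather than fixed standard-normal) prior, as used in the VSAE, changes the `prior` parameters per time step, but the same per-dimension KL monitoring applies.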


Similar Articles

1. Hierarchical and Self-Attended Sequence Autoencoder.
   IEEE Trans Pattern Anal Mach Intell. 2022 Sep;44(9):4975-4986. doi: 10.1109/TPAMI.2021.3068187. Epub 2022 Aug 4.
2. Learning Hierarchical Variational Autoencoders With Mutual Information Maximization for Autoregressive Sequence Modeling.
   IEEE Trans Pattern Anal Mach Intell. 2023 Feb;45(2):1949-1962. doi: 10.1109/TPAMI.2022.3160509. Epub 2023 Jan 6.
3. Attention Autoencoder for Generative Latent Representational Learning in Anomaly Detection.
   Sensors (Basel). 2021 Dec 24;22(1):123. doi: 10.3390/s22010123.
4. Translating medical image to radiological report: Adaptive multilevel multi-attention approach.
   Comput Methods Programs Biomed. 2022 Jun;221:106853. doi: 10.1016/j.cmpb.2022.106853. Epub 2022 May 4.
5. Improving Chemical Autoencoder Latent Space and Molecular Generation Diversity with Heteroencoders.
   Biomolecules. 2018 Oct 30;8(4):131. doi: 10.3390/biom8040131.
6. DyVGRNN: DYnamic mixture Variational Graph Recurrent Neural Networks.
   Neural Netw. 2023 Aug;165:596-610. doi: 10.1016/j.neunet.2023.05.048. Epub 2023 Jun 5.
7. An informative dual ForkNet for video anomaly detection.
   Neural Netw. 2024 Nov;179:106509. doi: 10.1016/j.neunet.2024.106509. Epub 2024 Jul 11.
8. Deep Latent-Variable Kernel Learning.
   IEEE Trans Cybern. 2022 Oct;52(10):10276-10289. doi: 10.1109/TCYB.2021.3062140. Epub 2022 Sep 19.
9. An LSTM-based adversarial variational autoencoder framework for self-supervised neural decoding of behavioral choices.
   J Neural Eng. 2024 Jul 9;21(3). doi: 10.1088/1741-2552/ad3eb3.
10. A stochastic variational framework for Recurrent Gaussian Processes models.
    Neural Netw. 2019 Apr;112:54-72. doi: 10.1016/j.neunet.2019.01.005. Epub 2019 Feb 1.