FET-LM: Flow-Enhanced Variational Autoencoder for Topic-Guided Language Modeling.

Authors

Tu Haoqin, Yang Zhongliang, Yang Jinshuai, Zhou Linna, Huang Yongfeng

Publication

IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):11180-11193. doi: 10.1109/TNNLS.2023.3249253. Epub 2024 Aug 5.

DOI: 10.1109/TNNLS.2023.3249253
PMID: 37028337
Abstract

The variational autoencoder (VAE) is widely used in unsupervised text generation because of its potential to derive meaningful latent spaces; however, it typically assumes that the distribution of texts follows a simple but poorly expressive isotropic Gaussian. In real-life scenarios, sentences with different semantics are unlikely to follow a simple isotropic Gaussian; instead, they tend to follow a more intricate and diverse distribution, because texts mix disparate topics. Considering this, we propose a flow-enhanced VAE for topic-guided language modeling (FET-LM). FET-LM models the topic and sequence latents separately, and it adopts a normalizing flow composed of Householder transformations for sequence posterior modeling, which can better approximate complex text distributions. FET-LM further leverages a neural latent topic component that takes the learned sequence knowledge into account, which not only eases the burden of learning topics without supervision but also guides the sequence component to incorporate topic information during training. To make the generated texts more correlated with their topics, we additionally assign the topic encoder the role of a discriminator. Encouraging results on abundant automatic metrics and three generation tasks demonstrate that FET-LM not only learns interpretable sequence and topic representations but is also fully capable of generating high-quality, semantically consistent paragraphs.
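The Householder flow mentioned in the abstract can be illustrated with a small sketch. This is not the authors' implementation: the dimensions, the number of reflections, and the random reflection vectors below are illustrative stand-ins for what FET-LM would learn. The key property is that each Householder reflection H = I - 2vvᵀ/‖v‖² is orthogonal, so stacking reflections reshapes a Gaussian posterior sample into a more expressive one without incurring a log-det-Jacobian term in the ELBO.

```python
import numpy as np

def householder_flow(z, vs):
    """Apply a stack of Householder reflections to each row of z:
    z <- z - 2 (z . v) v / ||v||^2. Every reflection is orthogonal
    (|det H| = 1), so the flow changes the shape of the posterior
    without a log-det Jacobian cost in the ELBO."""
    for v in vs:
        v = v / np.linalg.norm(v)           # unit reflection vector
        z = z - 2.0 * np.outer(z @ v, v)    # reflect each row across v's hyperplane
    return z

rng = np.random.default_rng(0)
dim, num_flows = 8, 4
# in the real model these vectors are produced by the encoder / learned
vs = rng.standard_normal((num_flows, dim))

# reparameterized sample z0 ~ N(mu, sigma^2), here with mu=0, sigma=1
z0 = rng.standard_normal((2, dim))
zK = householder_flow(z0, vs)

# orthogonal reflections preserve norms, a quick sanity check
assert np.allclose(np.linalg.norm(z0, axis=-1), np.linalg.norm(zK, axis=-1))
```

Because the Jacobian contribution is exactly zero, the KL term of the VAE objective only needs the base Gaussian density of `z0`, which is what makes Householder flows a cheap way to move the posterior beyond an isotropic Gaussian.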


Similar articles

1. FET-LM: Flow-Enhanced Variational Autoencoder for Topic-Guided Language Modeling.
   IEEE Trans Neural Netw Learn Syst. 2024 Aug;35(8):11180-11193. doi: 10.1109/TNNLS.2023.3249253. Epub 2024 Aug 5.
2. A Transformer-Based Hierarchical Variational AutoEncoder Combined Hidden Markov Model for Long Text Generation.
   Entropy (Basel). 2021 Sep 29;23(10):1277. doi: 10.3390/e23101277.
3. Variational inference with Gaussian mixture model and householder flow.
   Neural Netw. 2019 Jan;109:43-55. doi: 10.1016/j.neunet.2018.10.002. Epub 2018 Oct 17.
4. Investigating the Efficient Use of Word Embedding with Neural-Topic Models for Interpretable Topics from Short Texts.
   Sensors (Basel). 2022 Jan 23;22(3):852. doi: 10.3390/s22030852.
5. Hierarchical and Self-Attended Sequence Autoencoder.
   IEEE Trans Pattern Anal Mach Intell. 2022 Sep;44(9):4975-4986. doi: 10.1109/TPAMI.2021.3068187. Epub 2022 Aug 4.
6. Deep clustering analysis via variational autoencoder with Gamma mixture latent embeddings.
   Neural Netw. 2025 Mar;183:106979. doi: 10.1016/j.neunet.2024.106979. Epub 2024 Dec 4.
7. Multi-mode non-Gaussian variational autoencoder network with missing sources for anomaly detection of complex electromechanical equipment.
   ISA Trans. 2023 Mar;134:144-158. doi: 10.1016/j.isatra.2022.09.009. Epub 2022 Sep 12.
8. Combining Knowledge Graph and Word Embeddings for Spherical Topic Modeling.
   IEEE Trans Neural Netw Learn Syst. 2023 Jul;34(7):3609-3623. doi: 10.1109/TNNLS.2021.3112045. Epub 2023 Jul 6.
9. Modeling Topics in DFA-Based Lemmatized Gujarati Text.
   Sensors (Basel). 2023 Mar 1;23(5):2708. doi: 10.3390/s23052708.
10. A multimodal dynamical variational autoencoder for audiovisual speech representation learning.
    Neural Netw. 2024 Apr;172:106120. doi: 10.1016/j.neunet.2024.106120. Epub 2024 Jan 11.

Cited by

1. BERTopic_Teen: a multi-module optimization approach for short text topic modeling in adolescent health.
   Front Public Health. 2025 Aug 12;13:1608241. doi: 10.3389/fpubh.2025.1608241. eCollection 2025.