


Deep network embedding with dimension selection

Affiliations

Department of Statistics, Purdue University, West Lafayette, IN 47907, United States of America.

Publication

Neural Netw. 2024 Nov;179:106512. doi: 10.1016/j.neunet.2024.106512. Epub 2024 Jul 11.

DOI: 10.1016/j.neunet.2024.106512
PMID: 39032394
Full text: https://pmc.ncbi.nlm.nih.gov/articles/PMC11408115/
Abstract

Network embedding is a general-purpose machine learning technique that converts network data from non-Euclidean space to Euclidean space, facilitating downstream analyses for the networks. However, existing embedding methods are often optimization-based, with the embedding dimension determined in a heuristic or ad hoc way, which can cause potential bias in downstream statistical inference. Additionally, existing deep embedding methods can suffer from a nonidentifiability issue due to the universal approximation power of deep neural networks. We address these issues within a rigorous statistical framework. We treat the embedding vectors as missing data, reconstruct the network features using a sparse decoder, and simultaneously impute the embedding vectors and train the sparse decoder using an adaptive stochastic gradient Markov chain Monte Carlo (MCMC) algorithm. Under mild conditions, we show that the sparse decoder provides a parsimonious mapping from the embedding space to network features, enabling effective selection of the embedding dimension and overcoming the nonidentifiability issue encountered by existing deep embedding methods. Furthermore, we show that the embedding vectors converge weakly to a desired posterior distribution in the 2-Wasserstein distance, addressing the potential bias issue experienced by existing embedding methods. This work lays down the first theoretical foundation for network embedding within the framework of missing data imputation.
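The core idea in the abstract — treat the embedding vectors as missing data, impute them with a stochastic gradient MCMC sampler, and train a sparse decoder whose pruned dimensions reveal the right embedding size — can be illustrated with a toy sketch. This is not the paper's implementation: the linear decoder, Gaussian reconstruction likelihood, SGLD-style update, soft-thresholding stand-in for the sparse prior, and all step sizes and thresholds below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy network features: a symmetric adjacency matrix for n nodes (synthetic data).
n, d_max = 30, 8            # d_max: deliberately over-specified embedding dimension
A = (rng.random((n, n)) < 0.2).astype(float)
A = np.triu(A, 1)
A = A + A.T

Z = rng.normal(scale=0.1, size=(n, d_max))   # latent embeddings, treated as missing data
W = rng.normal(scale=0.1, size=(d_max, n))   # linear stand-in for the sparse decoder

def grads(Z, W):
    """Gradients of the log-posterior of a Gaussian model A ~ N(Z @ W, I), N(0, I) prior on Z."""
    R = A - Z @ W              # reconstruction residual
    gZ = R @ W.T - Z           # likelihood term plus Gaussian-prior term
    gW = Z.T @ R               # likelihood term (sparsity handled by thresholding below)
    return gZ, gW

eps, lam = 1e-3, 1e-3
for _ in range(2000):
    gZ, gW = grads(Z, W)
    # SGLD-style imputation step for the embeddings: gradient step plus Gaussian noise,
    # so Z is sampled from (an approximation of) its posterior rather than optimized.
    Z += eps * gZ + np.sqrt(2 * eps) * rng.normal(size=Z.shape)
    # Plain gradient step for the decoder, followed by soft-thresholding — a crude
    # stand-in for a sparse prior that prunes decoder rows tied to unneeded dimensions.
    W += eps * gW
    W = np.sign(W) * np.maximum(np.abs(W) - lam, 0.0)

# Embedding dimensions whose decoder rows were fully zeroed out are deemed unnecessary.
active = int(np.count_nonzero(np.abs(W).sum(axis=1) > 1e-8))
print(f"selected embedding dimension: {active} of {d_max}")
```

In the paper's framework the decoder is a sparse deep network with theoretical guarantees (parsimony of the mapping and 2-Wasserstein convergence of the imputed embeddings), and the sampler is an adaptive stochastic gradient MCMC algorithm; this sketch only conveys the alternating impute-embeddings / train-sparse-decoder structure.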
