

Sufficient Dimension Reduction via Direct Estimation of the Gradients of Logarithmic Conditional Densities.

Authors

Sasaki Hiroaki, Tangkaratt Voot, Niu Gang, Sugiyama Masashi

Affiliations

Graduate School of Information Science, Nara Institute of Science and Technology, Nara 630-0192, Japan

Center for Advanced Intelligence Project, RIKEN, Tokyo 103-0027, Japan

Publication

Neural Comput. 2018 Feb;30(2):477-504. doi: 10.1162/neco_a_01035. Epub 2017 Nov 21.

DOI: 10.1162/neco_a_01035
PMID: 29162006
Abstract

Sufficient dimension reduction (SDR) is aimed at obtaining the low-rank projection matrix in the input space such that information about output data is maximally preserved. Among various approaches to SDR, a promising method is based on the eigendecomposition of the outer product of the gradient of the conditional density of output given input. In this letter, we propose a novel estimator of the gradient of the logarithmic conditional density that directly fits a linear-in-parameter model to the true gradient under the squared loss. Thanks to this simple least-squares formulation, its solution can be computed efficiently in a closed form. Then we develop a new SDR method based on the proposed gradient estimator. We theoretically prove that the proposed gradient estimator, as well as the SDR solution obtained from it, achieves the optimal parametric convergence rate. Finally, we experimentally demonstrate that our SDR method compares favorably with existing approaches in both accuracy and computational efficiency on a variety of artificial and benchmark data sets.

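The SDR step described in the abstract — eigendecomposition of the average outer product of the estimated gradients — can be sketched in a few lines. This is a minimal illustration under our own assumptions, not the authors' implementation: it takes per-sample gradient estimates of the logarithmic conditional density as given (in the paper these come from the proposed closed-form least-squares estimator) and extracts the low-rank projection matrix. The function name `sdr_from_gradients` is hypothetical.

```python
import numpy as np

def sdr_from_gradients(G, m):
    """Given per-sample gradient estimates G (n x d) of the log
    conditional density of output given input, return a d x m
    projection matrix spanned by the top-m eigenvectors of the
    average outer product of the gradients."""
    n = G.shape[0]
    C = G.T @ G / n                      # average outer product, d x d
    eigvals, eigvecs = np.linalg.eigh(C)  # eigh: ascending eigenvalues
    return eigvecs[:, ::-1][:, :m]        # keep the top-m directions
```

If the true gradients concentrate in an m-dimensional subspace, the returned columns form an orthonormal basis estimating that subspace; accuracy therefore hinges entirely on the quality of the gradient estimates, which is the part the paper's least-squares estimator addresses.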

Similar Articles

1. Sufficient Dimension Reduction via Direct Estimation of the Gradients of Logarithmic Conditional Densities. Neural Comput. 2018 Feb;30(2):477-504. doi: 10.1162/neco_a_01035. Epub 2017 Nov 21.
2. Sufficient dimension reduction via squared-loss mutual information estimation. Neural Comput. 2013 Mar;25(3):725-58. doi: 10.1162/NECO_a_00407. Epub 2012 Dec 28.
3. Direct Estimation of the Derivative of Quadratic Mutual Information with Application in Supervised Dimension Reduction. Neural Comput. 2017 Aug;29(8):2076-2122. doi: 10.1162/NECO_a_00986. Epub 2017 Jun 9.
4. Conditional density estimation with dimensionality reduction via squared-loss conditional entropy minimization. Neural Comput. 2015 Jan;27(1):228-54. doi: 10.1162/NECO_a_00683.
5. Functional Sufficient Dimension Reduction Through Average Fréchet Derivatives. Ann Stat. 2022 Apr;50(2):904-929. doi: 10.1214/21-aos2131. Epub 2022 Apr 7.
6. Model-based policy gradients with parameter-based exploration by least-squares conditional density estimation. Neural Netw. 2014 Sep;57:128-40. doi: 10.1016/j.neunet.2014.06.006. Epub 2014 Jun 21.
7. Direct Density Derivative Estimation. Neural Comput. 2016 Jun;28(6):1101-40. doi: 10.1162/NECO_a_00835. Epub 2016 May 3.
8. A Generally Efficient Targeted Minimum Loss Based Estimator based on the Highly Adaptive Lasso. Int J Biostat. 2017 Oct 12;13(2). doi: 10.1515/ijb-2015-0097.
9. Sufficient dimension reduction via random-partitions for the large-p-small-n problem. Biometrics. 2019 Mar;75(1):245-255. doi: 10.1111/biom.12926. Epub 2018 Jul 27.
10. Logarithmic learning for generalized classifier neural network. Neural Netw. 2014 Dec;60:133-40. doi: 10.1016/j.neunet.2014.08.004. Epub 2014 Aug 19.