
Direct Estimation of the Derivative of Quadratic Mutual Information with Application in Supervised Dimension Reduction

Authors

Tangkaratt Voot, Sasaki Hiroaki, Sugiyama Masashi

Affiliations

University of Tokyo, Bunkyo-ku, Tokyo, 113-033, Japan

Nara Institute of Science and Technology, Ikoma, Nara 630-0192, Japan, and RIKEN Center for Advanced Intelligence Project, Chuo-ku, Tokyo 103-0027, Japan

Publication

Neural Comput. 2017 Aug;29(8):2076-2122. doi: 10.1162/NECO_a_00986. Epub 2017 Jun 9.

DOI: 10.1162/NECO_a_00986
PMID: 28599116
Abstract

A typical goal of linear-supervised dimension reduction is to find a low-dimensional subspace of the input space such that the projected input variables preserve maximal information about the output variables. The dependence-maximization approach solves the supervised dimension-reduction problem through maximizing a statistical dependence between projected input variables and output variables. A well-known statistical dependence measure is mutual information (MI), which is based on the Kullback-Leibler (KL) divergence. However, it is known that the KL divergence is sensitive to outliers. Quadratic MI (QMI) is a variant of MI based on the L2 distance, which is more robust against outliers than the KL divergence, and a computationally efficient method to estimate QMI from data, least squares QMI (LSQMI), has been proposed recently. For these reasons, developing a supervised dimension-reduction method based on LSQMI seems promising. However, not QMI itself but the derivative of QMI is needed for subspace search in linear-supervised dimension reduction, and the derivative of an accurate QMI estimator is not necessarily a good estimator of the derivative of QMI. In this letter, we propose to directly estimate the derivative of QMI without estimating QMI itself. We show that the direct estimation of the derivative of QMI is more accurate than the derivative of the estimated QMI. Finally, we develop a linear-supervised dimension-reduction algorithm that efficiently uses the proposed derivative estimator and demonstrate through experiments that the proposed method is more robust against outliers than existing methods.
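As a toy illustration of the quadratic mutual information the abstract refers to (not the paper's LSQMI estimator, which works on continuous data via least squares), note that for discrete variables QMI reduces to the squared L2 distance between the joint distribution and the product of its marginals. A minimal NumPy sketch, with the function name `quadratic_mi` chosen here for illustration:

```python
import numpy as np

def quadratic_mi(joint):
    """QMI(X, Y) = sum_{x,y} (p(x,y) - p(x) p(y))^2 for a discrete joint.

    joint: 2-D array of joint probabilities summing to 1.
    """
    joint = np.asarray(joint, dtype=float)
    px = joint.sum(axis=1, keepdims=True)   # marginal distribution of X
    py = joint.sum(axis=0, keepdims=True)   # marginal distribution of Y
    # Squared L2 distance between the joint and the product of marginals:
    return float(((joint - px * py) ** 2).sum())

# Independent variables: joint equals the product of marginals, so QMI is 0.
indep = np.outer([0.5, 0.5], [0.5, 0.5])
print(quadratic_mi(indep))  # 0.0

# Perfectly dependent binary variables: QMI is strictly positive.
dep = np.array([[0.5, 0.0], [0.0, 0.5]])
print(quadratic_mi(dep))  # 0.25
```

Unlike KL-based MI, each cell contributes a bounded squared difference, which is why a few outlying observations perturb QMI less than they perturb the log-ratio terms of MI.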


Similar articles

1. Direct Estimation of the Derivative of Quadratic Mutual Information with Application in Supervised Dimension Reduction.
   Neural Comput. 2017 Aug;29(8):2076-2122. doi: 10.1162/NECO_a_00986. Epub 2017 Jun 9.
2. Sufficient dimension reduction via squared-loss mutual information estimation.
   Neural Comput. 2013 Mar;25(3):725-58. doi: 10.1162/NECO_a_00407. Epub 2012 Dec 28.
3. Direct Density Derivative Estimation.
   Neural Comput. 2016 Jun;28(6):1101-40. doi: 10.1162/NECO_a_00835. Epub 2016 May 3.
4. Estimating optimal feature subsets using efficient estimation of high-dimensional mutual information.
   IEEE Trans Neural Netw. 2005 Jan;16(1):213-24. doi: 10.1109/TNN.2004.841414.
5. Sufficient Dimension Reduction via Direct Estimation of the Gradients of Logarithmic Conditional Densities.
   Neural Comput. 2018 Feb;30(2):477-504. doi: 10.1162/neco_a_01035. Epub 2017 Nov 21.
6. An estimating equation approach to dimension reduction for longitudinal data.
   Biometrika. 2016 Mar;103(1):189-203. doi: 10.1093/biomet/asv066. Epub 2016 Feb 16.
7. Adaptive Learning for Robust Radial Basis Function Networks.
   IEEE Trans Cybern. 2021 May;51(5):2847-2856. doi: 10.1109/TCYB.2019.2951811. Epub 2021 Apr 15.
8. Geometric mean for subspace selection.
   IEEE Trans Pattern Anal Mach Intell. 2009 Feb;31(2):260-74. doi: 10.1109/TPAMI.2008.70.
9. Avoiding Optimal Mean ℓ-Norm Maximization-Based Robust PCA for Reconstruction.
   Neural Comput. 2017 Apr;29(4):1124-1150. doi: 10.1162/NECO_a_00937. Epub 2017 Jan 17.
10. Graph embedded nonparametric mutual information for supervised dimensionality reduction.
    IEEE Trans Neural Netw Learn Syst. 2015 May;26(5):951-63. doi: 10.1109/TNNLS.2014.2329240.

Cited by

1. Kernel Density Estimation of Electromyographic Signals and Ensemble Learning for Highly Accurate Classification of a Large Set of Hand/Wrist Motions.
   Front Neurosci. 2022 Mar 9;16:796711. doi: 10.3389/fnins.2022.796711. eCollection 2022.